Nx Tip of the Week #3 - Many Ways to Create Arrays*
*tensors
In Nx, the fundamental type is the Nx.Tensor. You can think of a tensor as a multi-dimensional array, like the [numpy.ndarray](https://numpy.org/doc/stable/reference/generated/numpy.ndarray.html). For Elixir programmers, it’s easy to think of Nx.Tensor as a list, or a list-of-lists, or a list-of-lists-of-lists, and so on. This thought process is fine, but it might lead you to write code that’s overly dependent on Elixir lists rather than on native (and more efficient) tensor manipulation methods. This post will focus mainly on how to create tensors using the Nx API. For Elixir programmers, this should give you the tools to avoid writing unnecessary for-comprehensions and depending too heavily on lists.
From Numbers and Lists
We’ll start with a method you’ll probably be most tempted to use, but one you should usually try to avoid. The default tensor creation method is Nx.tensor/2. You can use this method to create tensors from scalars, lists, and even other tensors:
iex> Nx.tensor(1)
#Nx.Tensor<
s64
1
>
iex> Nx.tensor([1.0, 2.0, 3.0])
#Nx.Tensor<
f32[3]
[1.0, 2.0, 3.0]
>
iex> Nx.tensor([[[[[[[[[[1,2]]]]]]]]]])
#Nx.Tensor<
  s64[1][1][1][1][1][1][1][1][1][2]
  [
    [
      [
        [
          [
            [
              [
                [
                  [
                    [1, 2]
                  ]
                ]
              ]
            ]
          ]
        ]
      ]
    ]
  ]
>
Notice Nx.tensor/2 infers the type and calculates the shape of your input list or scalar. By default, Nx.tensor/2 will create tensors with type s64 when the inputs are all integers and f32 when the inputs are all floats. If it’s a mix, Nx.tensor/2 will merge to the higher type:
iex> Nx.tensor([1.0, 2])
#Nx.Tensor<
f32[2]
[1.0, 2.0]
>
You can also specify the input type and dimension names:
iex> Nx.tensor([1, 2, 3], type: {:bf, 16}, names: [:data])
#Nx.Tensor<
bf16[data: 3]
[1.0, 2.0, 3.0]
>
As well as the backend:
iex> Nx.tensor([1, 2, 3], backend: Torchx.Backend)
#Nx.Tensor<
s64[3]
[1, 2, 3]
>
Using Nx.tensor/2 is convenient, but it is generally less efficient than other methods. Nx.Tensor generally represents tensor data as a binary, so Nx.tensor/2 needs to iterate through the entire list and rewrite it as a binary. You should avoid this, if possible.
From Binaries
Instead of creating tensors from lists, you should try to create tensors from binaries. As I said before, tensor data is generally stored in a binary. This is because binaries are just C byte arrays, so native manipulation is usually more efficient than with other data types. Oftentimes you will receive data, such as images, as bytes (see MNIST), so you’ll want to initialize a tensor directly from the input bytes. You can do this using Nx.from_binary/2:
iex> Nx.from_binary(<<0::64-signed-native>>, {:s, 64})
#Nx.Tensor<
s64[1]
[0]
>
iex> Nx.from_binary(<<0::32-float-native>>, {:f, 32})
#Nx.Tensor<
f32[1]
[0.0]
>
Note: You’ll likely want to brush up on binary pattern matching, creation, and manipulation as you work with Nx.
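As a quick refresher, here’s a sketch of packing a list of floats into a native-endian binary and handing it to Nx.from_binary/2, using only standard Elixir:

```elixir
# Pack each float as a 32-bit native-endian binary and concatenate.
data = for x <- [1.0, 2.0, 3.0], into: <<>>, do: <<x::32-float-native>>

# The resulting 12-byte binary becomes a flat f32 tensor with 3 elements.
Nx.from_binary(data, {:f, 32})
#=> #Nx.Tensor<
#=>   f32[3]
#=>   [1.0, 2.0, 3.0]
#=> >
```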
Notice Nx.from_binary/2 requires the input type and infers the shape as a flat list. It’s not really possible to infer a type directly from the input binary. You can cast the bytes to pretty much any type, but this will likely lead to unexpected results:
iex> Nx.from_binary(<<1::64-float-native>>, {:f, 64})
#Nx.Tensor<
f64[1]
[1.0]
>
iex> Nx.from_binary(<<1::64-float-native>>, {:f, 32})
#Nx.Tensor<
f32[2]
[0.0, 1.875]
>
iex> Nx.from_binary(<<1::64-float-native>>, {:s, 64})
#Nx.Tensor<
s64[1]
[4607182418800017408]
>
iex> Nx.from_binary(<<1::64-float-native>>, {:u, 8})
#Nx.Tensor<
u8[8]
[0, 0, 0, 0, 0, 0, 240, 63]
>
Both the shape and the values change based on the input type! This can lead to some unexpected bugs, so you’ll need to ensure your input types line up to avoid unexpected behavior. Notice Nx.from_binary/2 always creates rank-1 tensors. If you have an input that you need to be multi-dimensional, you’ll want to use Nx.reshape/2:
iex> t = Nx.from_binary(<<1::64-float-native>>, {:f, 64})
iex> Nx.reshape(t, {1, 1, 1, 1})
#Nx.Tensor<
  f64[1][1][1][1]
  [
    [
      [
        [1.0]
      ]
    ]
  ]
>
Initially, you might be concerned about efficiency, but Nx.reshape/2 is actually just a meta operation. The implementation doesn’t move the underlying bytes at all; it just changes the shape property of the input tensor! A drawback of this approach is that you need to know the shape of your input data ahead of time. But when you know the input shape and type, and you’re able to get the raw bytes of data, you should prefer Nx.from_binary/2 over Nx.tensor/2.
Broadcasting
If you’re familiar with NumPy, PyTorch, or TensorFlow, you might initially be concerned that Nx is missing something akin to np.full or np.full_like. Fortunately, you can achieve the same thing with Nx.broadcast/2:
iex> zeros = Nx.broadcast(0, {2, 5})
#Nx.Tensor<
s64[2][5]
[
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0]
]
>
iex> ones_like_zeros = Nx.broadcast(1, zeros)
#Nx.Tensor<
s64[2][5]
[
[1, 1, 1, 1, 1],
[1, 1, 1, 1, 1]
]
>
Thanks to scalar broadcasting, you can create full tensors from shapes and other tensors! If you want to dictate the output type, you should wrap the scalar in a call to Nx.tensor/2:
iex> Nx.broadcast(Nx.tensor(0, type: {:bf, 16}), {2, 2})
#Nx.Tensor<
bf16[2][2]
[
[0.0, 0.0],
[0.0, 0.0]
]
>
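If you miss the NumPy names, it’s easy to sketch your own helpers on top of this pattern. Note that full/3 and full_like/3 below are hypothetical names for illustration, not part of the Nx API:

```elixir
defmodule Creation do
  # Like np.full: fill a tensor of the given shape with `value`.
  def full(shape, value, opts \\ []) do
    Nx.broadcast(Nx.tensor(value, opts), shape)
  end

  # Like np.full_like: fill a tensor shaped like `tensor` with `value`.
  def full_like(tensor, value, opts \\ []) do
    Nx.broadcast(Nx.tensor(value, opts), tensor)
  end
end
```

Creation.full({2, 5}, 0) then behaves just like the broadcast examples above.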
Counting Up
Another useful tensor creation method is Nx.iota/2. Nx.iota/2 is like [np.arange](https://numpy.org/doc/stable/reference/generated/numpy.arange.html) - it counts up along a given axis:
iex> Nx.iota({2, 5}, axis: 1)
#Nx.Tensor<
s64[2][5]
[
[0, 1, 2, 3, 4],
[0, 1, 2, 3, 4]
]
>
If no axis is given, it will count up the entire tensor:
iex> Nx.iota({2, 5})
#Nx.Tensor<
s64[2][5]
[
[0, 1, 2, 3, 4],
[5, 6, 7, 8, 9]
]
>
As with most tensor creation methods, you can specify a type and names:
iex> Nx.iota({1}, type: {:bf, 16}, names: [:data])
#Nx.Tensor<
bf16[data: 1]
[0.0]
>
You can also pass another tensor as a shape:
iex> a = Nx.broadcast(0, {2, 5})
iex> Nx.iota(a)
#Nx.Tensor<
s64[2][5]
[
[0, 1, 2, 3, 4],
[5, 6, 7, 8, 9]
]
>
If you want evenly spaced values with a step other than one, you can achieve that with multiplication:
iex> Nx.multiply(Nx.iota({2, 5}, axis: 1), 3)
#Nx.Tensor<
s64[2][5]
[
[0, 3, 6, 9, 12],
[0, 3, 6, 9, 12]
]
>
Nx.iota/2 can also be useful for implementing other tensor creation methods, like eye (the identity matrix):
iex> Nx.equal(Nx.iota({3, 3}, axis: 0), Nx.iota({3, 3}, axis: 1))
#Nx.Tensor<
u8[3][3]
[
[1, 0, 0],
[0, 1, 0],
[0, 0, 1]
]
>
You’ll notice you can build a number of creation methods simply out of primitives like Nx.iota/2!
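As a sketch, you could package the identity trick above into a helper. The eye/1 name here mirrors NumPy’s np.eye; check whether your Nx version already ships an Nx.eye before rolling your own:

```elixir
defmodule Creation do
  # n-by-n identity matrix: 1 where the row index equals the column index.
  def eye(n) do
    Nx.equal(Nx.iota({n, n}, axis: 0), Nx.iota({n, n}, axis: 1))
  end
end
```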
Random Numbers
Nx currently has two primitives for generating random numbers: Nx.random_uniform and Nx.random_normal. As with other creation methods, you can pass types, names, shapes, and tensors to create new randomly generated tensors:
iex> a = Nx.random_uniform({2, 2})
#Nx.Tensor<
f32[2][2]
[
[0.02345350757241249, 0.7847864031791687],
[0.11917673051357269, 0.040481213480234146]
]
>
iex> Nx.random_normal(a)
#Nx.Tensor<
f32[2][2]
[
[-0.8182370662689209, -0.21420666575431824],
[-0.8946113586425781, 0.5302359461784363]
]
>
iex> Nx.random_uniform({2, 2}, 0, 5, type: {:u, 32})
#Nx.Tensor<
u32[2][2]
[
[3, 2],
[1, 2]
]
>
Both Nx.random_uniform and Nx.random_normal optionally take two additional arguments. For Nx.random_uniform, these arguments are the min and max of the random interval [min, max). For Nx.random_normal, these arguments are the mean and scale of the distribution.
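For example, here’s a sketch drawing samples centered around 10.0 with a scale of 5.0 (your exact values will differ, since the output is random):

```elixir
# A {2, 2} f32 tensor of samples from a normal distribution
# with mean 10.0 and standard deviation 5.0.
Nx.random_normal({2, 2}, 10.0, 5.0)
```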
The Nx.random_* methods prove useful when creating other random-like methods. For example, you can create a random mask using Nx.random_uniform:
iex> probability = 0.5
iex> Nx.select(Nx.less_equal(Nx.random_uniform({5, 5}), probability),
...> 0,
...> 1)
#Nx.Tensor<
s64[5][5]
[
[1, 0, 0, 0, 0],
[0, 1, 0, 1, 1],
[1, 1, 0, 0, 1],
[1, 1, 1, 1, 0],
[0, 1, 1, 1, 0]
]
>
Templates
Nx
also has a template creation method that defines a template for an expected future value. This is useful for things like ahead-of-time compilation. You can create templates using Nx.template/3
, but you won’t be able to use the resulting tensor anywhere:
iex> t = Nx.template({4, 4, 4}, {:f, 32}, names: [:x, :y, :z])
#Nx.Tensor<
f32[x: 4][y: 4][z: 4]
Nx.TemplateBackend
>
iex> Nx.add(t, 1)
** (RuntimeError) cannot perform operations on a Nx.TemplateBackend tensor
Hopefully this gives you a primer on ways to create tensors in Nx. If you have any questions or issues, let me know!