
Converting Short Tensorflow 1.13 Script Into Tensorflow 2.0

I am trying to learn the dynamics of TensorFlow 2.0 by converting my TensorFlow 1.13 script (below) into a TensorFlow 2.0 script. However, I am struggling to do this. I think the main

Solution 1:

Instead of using the conversion tool (it exists, but I don't like it since it more or less just prefixes the API calls with tf.compat.v1 and keeps using the old TensorFlow 1.x API), I'll help you convert your code to the new version.
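(For reference, that tool is the tf_upgrade_v2 script that ships with TensorFlow 2. Assuming your file is called script.py, you would invoke it like this, but as said the result still relies on tf.compat.v1:

tf_upgrade_v2 --infile script.py --outfile script_v2.py
)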

Sessions are gone, and so are the placeholders. The reason? The code is executed line by line - that is TensorFlow's eager mode.
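For example (a minimal illustration, not taken from your script), in TF 2 you can run an op and inspect its value immediately, with no session:

import tensorflow as tf

x = tf.constant([1.0, 2.0, 3.0])
y = tf.nn.softmax(x)
print(y)  # the result is already available as an eager tensor, no session.run needed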

To train the model you have to use an optimizer, as you did. If you want to use the minimize method, in TensorFlow 2.0 you have to define the function to minimize (the loss) as a Python callable.

import numpy as np
import tensorflow as tf

# This is your "model"
theta = tf.Variable(np.zeros(100))

# Define the optimizer
optimizer = tf.keras.optimizers.Adam()

# Define the training loop with the loss inside (because we use the
# .minimize method, which requires a callable with no arguments)

trainable_variables = [theta]

for epoch in range(10):
    for datum in sample_data():  # sample_data() is the generator from your original script
        # The loss must be a callable that returns the value to minimize.
        # The softmax is computed inside it so the gradient can flow back to theta.
        def loss_fn():
            p_s = tf.nn.softmax(theta)
            return tf.reduce_mean(-tf.math.log(tf.gather(p_s, datum)))
        optimizer.minimize(loss_fn, var_list=trainable_variables)
    tf.print("epoch ", epoch, " finished. ps: ", tf.nn.softmax(theta))

Disclaimer: I haven't tested the code, but it should work (or at least give you an idea of how to implement what you're trying to achieve in TF 2).
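If you prefer, you can write the same training step explicitly with tf.GradientTape instead of relying on .minimize; this is just a sketch of the equivalent update, under the same assumption that sample_data() comes from your original script:

for epoch in range(10):
    for datum in sample_data():
        with tf.GradientTape() as tape:
            p_s = tf.nn.softmax(theta)
            loss = tf.reduce_mean(-tf.math.log(tf.gather(p_s, datum)))
        # Compute the gradient of the loss w.r.t. theta and apply the update
        grads = tape.gradient(loss, [theta])
        optimizer.apply_gradients(zip(grads, [theta]))
    tf.print("epoch ", epoch, " finished. ps: ", tf.nn.softmax(theta))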

