1. Setup
TensorFlow can be configured to send data to log files using the SummaryWriter object.
First, initialize the SummaryWriter:
# create log writer object
writer = tf.train.SummaryWriter(logs_path, graph=tf.get_default_graph())
and then write to the summary logs at each training step:
# write log
writer.add_summary(summary, epoch * batch_count + i)
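The summary value written here comes from evaluating a merged summary op inside the session. As a minimal sketch of how the pieces connect (the names cross_entropy, train_op, x, y_, batch_x, and batch_y are all defined in the full example below):

# define scalar summaries and merge them into one op
tf.scalar_summary("cost", cross_entropy)
summary_op = tf.merge_all_summaries()
# evaluating the merged op yields a serialized summary for the writer
_, summary = sess.run([train_op, summary_op], feed_dict={x: batch_x, y_: batch_y})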
Full Code Example:
Code sample borrowed from this wonderful collection of TensorFlow tutorials: https://ischlag.github.io/2016/06/04/how-to-use-tensorboard/
import tensorflow as tf

# reset everything to rerun in jupyter
tf.reset_default_graph()

# config
batch_size = 100
learning_rate = 0.5
training_epochs = 5
logs_path = "/tmp/mnist/2"

# load mnist data set
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

# input images
with tf.name_scope('input'):
    # None -> batch size can be any size, 784 -> flattened mnist image
    x = tf.placeholder(tf.float32, shape=[None, 784], name="x-input")
    # target 10 output classes
    y_ = tf.placeholder(tf.float32, shape=[None, 10], name="y-input")

# model parameters will change during training so we use tf.Variable
with tf.name_scope("weights"):
    W = tf.Variable(tf.zeros([784, 10]))

# bias
with tf.name_scope("biases"):
    b = tf.Variable(tf.zeros([10]))

# implement model
with tf.name_scope("softmax"):
    # y is our prediction
    y = tf.nn.softmax(tf.matmul(x, W) + b)

# specify cost function
with tf.name_scope('cross_entropy'):
    # this is our cost
    cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))

# specify optimizer
with tf.name_scope('train'):
    # optimizer is an "operation" which we can execute in a session
    train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(cross_entropy)

with tf.name_scope('Accuracy'):
    # accuracy
    correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

# create a summary for our cost and accuracy
tf.scalar_summary("cost", cross_entropy)
tf.scalar_summary("accuracy", accuracy)

# merge all summaries into a single "operation" which we can execute in a session
summary_op = tf.merge_all_summaries()

with tf.Session() as sess:
    # variables need to be initialized before we can use them
    sess.run(tf.initialize_all_variables())

    # create log writer object
    writer = tf.train.SummaryWriter(logs_path, graph=tf.get_default_graph())

    # perform training cycles
    for epoch in range(training_epochs):
        # number of batches in one epoch
        batch_count = int(mnist.train.num_examples / batch_size)

        for i in range(batch_count):
            batch_x, batch_y = mnist.train.next_batch(batch_size)

            # perform the operations we defined earlier on batch
            _, summary = sess.run([train_op, summary_op], feed_dict={x: batch_x, y_: batch_y})

            # write log
            writer.add_summary(summary, epoch * batch_count + i)

        if epoch % 5 == 0:
            print "Epoch: ", epoch

    print "Accuracy: ", accuracy.eval(feed_dict={x: mnist.test.images, y_: mnist.test.labels})
    print "done"
2. Run TensorBoard server
TensorBoard serves its visualizations over HTTP. Start the server using the following command:
tensorboard --logdir=run1:/tmp/mnist/ --port 6006
Notice that we are pointing the TensorBoard server at the parent log directory /tmp/mnist/ rather than the specific run directory /tmp/mnist/2 we wrote to in the code example above; TensorBoard treats each subdirectory as a separate run. Also note the port, 6006.
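The run1: prefix in the command above simply names the run. TensorBoard of this era also accepts a comma-separated list of name:path pairs, so as a sketch, if you had written a second run to a hypothetical directory /tmp/mnist/1, you could compare the two side by side:

tensorboard --logdir=run1:/tmp/mnist/1,run2:/tmp/mnist/2 --port 6006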
Note: you can add an additional PTY session by hitting the plus button at the bottom of the interface, which lets you run two windows in parallel. Other popular programs such as screen and tmux also work well; a tmux sketch follows.
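For example, a minimal tmux workflow for keeping the TensorBoard server running in its own session might look like this (the session name tensorboard is arbitrary):

# start a named session, then launch tensorboard inside it
tmux new -s tensorboard
# detach with Ctrl-b d; later, reattach with:
tmux attach -t tensorboard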
3. Run the program
In a new window/process, run the example program using python main.py
TensorFlow will initialize and write its data to the log files.
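As a quick sanity check (assuming the logs_path from the example above), you can list the log directory; TensorFlow writes event files with names of the form events.out.tfevents.<timestamp>.<hostname>:

ls /tmp/mnist/2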
4. View TensorBoard
You can find your public IP address in the Paperspace Terminal (see screenshot below).
Once you have your public IP address and the port the TensorBoard server is listening on (in our case, port 6006), you can open any web browser, navigate to http://<your-public-ip>:6006, and voila.