
    As the hype around deep learning gains momentum, many frameworks and libraries emerge around it. I have had the chance to use some of them in practice and want to share my observations on what works well and what you should be aware of while working with these frameworks. One step at a time, let’s start with TensorFlow.

    TensorFlow is one of many frameworks used when working with neural networks, and often referred to as the best of them. It is Google’s open-source library for numerical computation, which also offers a set of tools for designing, training and fine-tuning neural networks.

    At NeuroSYS we have used it for computer vision problems such as image classification and generating artificial images, for example when building a real-time object detection system and a bacterial classification system.

    But enough of broad descriptions, let’s jump straight to the point.

    TensorFlow Pros and Cons

    Pros:

    #1 Great data visualization tool – TensorBoard

    TensorBoard is a suite of visualization tools in the TensorFlow library that makes it easier to understand, debug, and optimize neural networks. It lets you present neural network graphs, inputs, outputs, training progress and any additional information in a clean, readable way using only a few lines of code.

    [Image: TensorBoard graph visualization]

    Here is how you can easily visualize data in TensorBoard:

    # choose which tensors to generate summaries from
    loss_summary = tf.summary.scalar('loss_1', loss1)
    image_summary = tf.summary.image('generated_image', result)
    # merge the whole summary into one instruction
    summary = tf.summary.merge([loss_summary, image_summary])
    
    sess = tf.Session()
    # define a summary writer and store the graph alongside the summaries
    summary_writer = tf.summary.FileWriter('path/to/summary/', graph=sess.graph)
    
    # run the summary op along with the rest of the computation
    # ('input_placeholder' stands for the network's input placeholder)
    result, summary_values = sess.run([network_output, summary],
                                      feed_dict={input_placeholder: input_data})
    # write the summary to disk and view it in TensorBoard
    summary_writer.add_summary(summary_values)
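
    Once the summaries are written to disk, you can launch TensorBoard from the command line with tensorboard --logdir path/to/summary/ and inspect everything in the browser (by default at localhost:6006).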

    #2 Easy production-ready model sharing

    With TensorFlow one can easily share a trained model. This may sound like a must-have feature, but it’s still not a standard across different frameworks.

    Many frameworks require you to provide the full code of a model in order to load its weights into it. TensorFlow, on the other hand, requires only a checkpoint file and the names of the layers you need for inference (the input layer is the most important here, because without it we cannot feed data into the computation graph).

    This feature leads us to pro #3, as it makes a TensorFlow model useful for a broad spectrum of applications. It allows you to use the same model (without rewriting or recompiling it) in various projects, no matter what language they are written in.

    It’s still not a one-liner, but at least you don’t need to define the whole model:

    
    # imported graph to be used as the default graph later
    imported_graph = tf.Graph()
    with imported_graph.as_default():
        # read the graph definition from file
        graph_def = tf.GraphDef()
        with tf.gfile.GFile('path/to/model', 'rb') as model:
            # parse it
            graph_def.ParseFromString(model.read())
        # and import it into TensorFlow
        tf.import_graph_def(graph_def, name="imported_model")
    
    with imported_graph.as_default():
        # imported tensor names are prefixed with the name passed to import_graph_def
        output = tf.Session().run("imported_model/output:0",
                                  feed_dict={"imported_model/input:0": our_input})
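
    For completeness, producing such a single portable model file in the first place takes only a few lines as well. A rough sketch (TensorFlow 1.x, with a toy model and hypothetical node names and paths):
    
    import tensorflow as tf
    
    # a tiny model just for illustration
    x = tf.placeholder(tf.float32, [None, 4], name='input')
    w = tf.Variable(tf.ones([4, 2]))
    y = tf.identity(tf.matmul(x, w), name='output')
    
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        # bake variable values into constants, keeping only what 'output' depends on
        frozen_graph = tf.graph_util.convert_variables_to_constants(
            sess, sess.graph_def, output_node_names=['output'])
    
    # serialize the frozen graph to a single file
    with tf.gfile.GFile('path/to/model', 'wb') as f:
        f.write(frozen_graph.SerializeToString())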
    

    #3 Multiple language support

    TensorFlow is designed to support multiple client languages. It officially supports Python, C++, JavaScript, Go, Java and Swift, although only Python, as the most commonly used one, supports all available features.

    Due to its high popularity, the TensorFlow community has created bindings for other languages as well, such as C# (which we have used, and I can say it worked pretty well) and Ruby.

    This ensures portability and allows developers to use machine learning models for desktop, mobile and even web applications.

    Cons:

    #1 Too cluttered code

    This is something the TensorFlow developers are working on and have already announced will be addressed in the 2.0 release. Sadly, the current state of the framework is inconsistent:

    Which one to use?

    • tf.nn.conv2d
    • tf.nn.convolution
    • tf.layers.conv2d
    • tf.layers.Conv2D
    • tf.contrib.layers.conv2d
    • tf.contrib.layers.convolution2d

    Even typing “tf conv2d” into Google leads to three of these options, making it really frustrating when you just want to find out which operation to use.
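
    To make the redundancy concrete, here is a quick sketch (TensorFlow 1.x) of two of these options producing an equivalent convolution – the low-level op and the layer-level wrapper:
    
    import tensorflow as tf
    
    images = tf.placeholder(tf.float32, [None, 28, 28, 1])
    
    # low-level op: you create and manage the filter variable yourself
    kernel = tf.Variable(tf.random_normal([3, 3, 1, 32]))
    conv_a = tf.nn.conv2d(images, kernel, strides=[1, 1, 1, 1], padding='SAME')
    
    # layer-level wrapper: creates and tracks the variables for you
    conv_b = tf.layers.conv2d(images, filters=32, kernel_size=3, padding='same')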

    #2 Need for extra code

    As you can see in the examples above, the amount of code needed to add functionality is not that big. Nevertheless, the naming conventions can be inconsistent, and the complexity of the modules can be overwhelming.

    Every computation needs to be called through a session handler, which makes using TensorFlow feel like using a language within another language. You can forget about writing clean, Pythonic code when even something as simple as a for loop needs to be expressed with its TensorFlow equivalent, as the sketch below shows.
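
    A minimal sketch (TensorFlow 1.x) of what that means in practice – a simple accumulation loop has to be rewritten with tf.while_loop so that it can run inside the graph:
    
    import tensorflow as tf
    
    # plain Python: total = sum of integers from 0 to 9
    total = 0
    for i in range(10):
        total += i
    
    # the TensorFlow equivalent has to be expressed as graph operations
    i = tf.constant(0)
    acc = tf.constant(0)
    condition = lambda i, acc: tf.less(i, 10)
    body = lambda i, acc: (tf.add(i, 1), tf.add(acc, i))
    _, result = tf.while_loop(condition, body, [i, acc])
    
    with tf.Session() as sess:
        print(sess.run(result))  # prints 45, same as the Python loop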

    Sometimes the documentation even “forgets” to tell you that you need to include additional instructions in your code to make things work. This happened to me when I was trying to write my own data loader using the TensorFlow pipeline and to provide multiple workers to parallelize the computation.

    What was not included in the documentation, however, is that you need to launch these workers manually. Without this part, the whole script simply halts, waiting for the workers to provide data. With no error or warning, it just stops and waits for data from workers that were never launched.
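
    For illustration, the classic variant of this pitfall shows up with the old queue-based input pipeline: unless the queue runners are started explicitly, the read below blocks forever. A simplified sketch (TensorFlow 1.x, the file names are just placeholders), not necessarily the exact setup I used:
    
    import tensorflow as tf
    
    # a minimal queue-based input pipeline
    filename_queue = tf.train.string_input_producer(['data_0.csv', 'data_1.csv'])
    reader = tf.TextLineReader()
    _, line = reader.read(filename_queue)
    
    with tf.Session() as sess:
        coord = tf.train.Coordinator()
        # without this call the queues are never filled and sess.run(line) hangs
        threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    
        print(sess.run(line))
    
        coord.request_stop()
        coord.join(threads)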

    This is an example of how the modules aren’t always seamlessly connected with each other, which leads to a lack of communication between them and, eventually, to situations like the one described above.


    #3 Frequent releases

    To some, this might sound like an advantage. But in reality, new releases every one or two months are better avoided in a production environment, especially when they tend to break backward compatibility.

    We find this especially harmful when using bindings in different languages, such as C# TensorSharp. My team once encountered a problem after new arguments were introduced into one of the commonly used functions, which broke compatibility with TensorSharp. The easiest solution we found was to use an older version of TensorFlow to re-export the whole model.

    We understand that some changes in a rapidly developed framework are inevitable, but perhaps the community would benefit more if releases were less frequent and more attention were paid to the consistency of the framework.

    A few tips for easier work with TensorFlow

    Tip #1

    For better performance, avoid running a session just to predict a single result. Computing many operations one by one takes more time than gathering them into a single session run. This is possible thanks to TensorFlow’s ability to parallelize computation, and it also avoids the overhead of invoking the session multiple times.
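
    As a minimal illustration (TensorFlow 1.x), fetching several results in one run call is preferable to a separate call per operation:
    
    import tensorflow as tf
    
    a = tf.constant(2.0)
    b = tf.constant(3.0)
    sum_op = tf.add(a, b)
    prod_op = tf.multiply(a, b)
    
    with tf.Session() as sess:
        # slower: one session run per operation
        s = sess.run(sum_op)
        p = sess.run(prod_op)
    
        # faster: gather everything into a single session run
        s, p = sess.run([sum_op, prod_op])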

    Tip #2

    Avoid clutter in your code! This is rather general advice, but it’s really essential when working with TensorFlow. Moving the definition of a network to a separate file is a good idea. This way you can easily modify it later without searching through large files.
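
    A minimal sketch of what I mean (the file names and layer sizes are just an assumption): keep the network definition in its own module and import it from the training script.
    
    # model.py – only the network definition lives here
    import tensorflow as tf
    
    def build_network(inputs):
        net = tf.layers.dense(inputs, 128, activation=tf.nn.relu)
        return tf.layers.dense(net, 10)
    
    # train.py – the training script just imports and uses it
    # from model import build_network
    # logits = build_network(input_placeholder)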

    To Sum Up

    Despite all the cons, TensorFlow is one of the most widely used frameworks for deep learning projects, adopted by such giants as IBM, Twitter, 9GAG, Ocado… and (surprise, surprise!) Google. It stays on my top list as well, although I truly wish those flaws get fixed one day (and the sooner the better).

    At the same time, I think TensorFlow can be a bit overwhelming for beginners, as it often provides too many implementation options. This might be confusing for those with no experience or only a basic understanding of the differences among the suggested implementations. If this scares you off, it’s better to opt for simpler alternatives.

    So this is it. I hope you found something useful in this blog post. Stay tuned for further updates on deep learning frameworks. And don’t hesitate to drop me a line if you have any questions – t.bonus@neurosys.com.