Use ChatGPT for Debugging | by Dmytro Nikolaiev


Enhance your debugging experience and learn faster with the power of Large Language Models

Photo by Pavel Danilyuk on Pexels

It’s hard to deny that Large Language Models (LLMs) are making a profound impact on various industries and applications, revolutionizing the way we work and interact. Even though the initial hype around ChatGPT has calmed down since its release about six months ago (in November 2022), its influence remains significant. It seems that autoregressive LLMs will continue to be a part of our lives in the near future, and it is worth developing skills to interact with them, both as a developer and a user.

As Chip Huyen stated in her blog post, it is relatively easy to achieve something impressive with LLMs, but it is quite challenging to build something production-ready considering the limitations and potential issues that LLMs currently have. However, while the research and engineering community is actively working to address these challenges, it is worth acknowledging the fact that individuals can already benefit greatly from LLMs, at least using them as personal assistants for everyday non-critical tasks or as collaborators for brainstorming.

In my previous article, I discussed the best practices of prompt engineering, providing insights to help you develop your own LLM-based applications. In this post, I will share a set of techniques that will enable you to use models such as ChatGPT for effective code debugging and accelerated learning of programming. We will also look at example prompts for writing and explaining code. These techniques will be valuable not only when interacting with ChatGPT, but also when seeking assistance from colleagues or even tackling programming challenges on your own.

This article is primarily targeted toward beginners, so I tried to provide illustrative examples and explanations. I hope these techniques will assist you in understanding and troubleshooting code more efficiently.

In fact, ChatGPT has not changed the debugging process much. The great thing is that you can now easily consult a virtual colleague without worrying about being a bother or feeling hesitant to ask stupid questions! But the techniques we will consider have existed for as long as software engineering itself, so they will be useful not only when interacting with LLMs, but also for understanding the process better and collaborating more effectively with coworkers.

To find a bug in your code, you only need two essential steps (well, three, actually):

  1. Isolate the bug and demonstrate it with the minimum amount of code;
  2. Make an assumption about your error and test it;
  3. Iterate with more assumptions until you find a fix.

While you can start using ChatGPT right away, it’s actually a better idea to begin by reproducing the error for a few reasons. First of all, it might be challenging to include all the related points and explain exactly what you’re trying to achieve within the context of the language model. Secondly, it will allow you to gain a better understanding of the issue and possibly find the error yourself. Let’s see.

By the way, in this post I am using the vanilla version of ChatGPT (GPT-3.5), but for coding tasks, GPT-4 is typically more proficient.

Step 1: Isolate and Reproduce the Problem with the Minimum Amount of Code

The first step is to reproduce the problem. As we know, the majority of issues can still be resolved with the classic “turn it off and on again”. It’s possible that you simply got tangled up in the cell execution order of your Jupyter Notebook.

If possible (and it typically is), it’s recommended to write new code that throws the same error and keep it as simple as possible.

Let’s consider the example of a TypeError: 'int' object is not iterable, which occurs when you try to iterate over some_integer instead of using the range(some_integer) construct.
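A minimal illustration of this error and its fix (the variable names are just placeholders):

```python
some_integer = 5

# Buggy version: iterating over the int itself raises TypeError
error_message = None
try:
    for i in some_integer:
        pass
except TypeError as exc:
    error_message = str(exc)
print(error_message)  # 'int' object is not iterable

# Fixed version: iterate over range(some_integer) instead
total = sum(range(some_integer))
print(total)  # 0 + 1 + 2 + 3 + 4 = 10
```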

Bad example: a function calls another function that then invokes a method of a class. At first glance, it may require some time to determine where the actual computation occurs, despite this being a relatively simple example. Similarly, for models, it becomes more challenging to locate the relevant information among unrelated details.
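The original snippet is not reproduced here, but a hypothetical sketch of the structure described above might look like this (the class name `Worker`, `some_function()`, and `main()` are illustrative assumptions; only `do_some_work()` comes from the article):

```python
class Worker:
    """Hypothetical class that hides the buggy computation."""

    def __init__(self, a):
        self.a = a  # a is an int, but the loop below expects an iterable

    def do_some_work(self):
        s = 0
        for i in self.a:  # TypeError: 'int' object is not iterable
            s += i
        return s


def some_function(a):
    # An extra layer of indirection between the caller and the bug
    return Worker(a).do_some_work()


def main():
    return some_function(5)


try:
    main()
except TypeError as exc:
    print(exc)  # 'int' object is not iterable
```

The traceback passes through three layers before reaching the actual loop, which is what makes the bug harder to spot for both humans and models.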

Better example: get rid of the class by moving the functionality of do_some_work() function (which is causing the error) directly into the function we are calling.
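A hypothetical sketch of this flattened version, assuming the same illustrative names as before:

```python
def some_function(a):
    # The body of do_some_work() now lives directly in the function we call
    s = 0
    for i in a:  # TypeError: 'int' object is not iterable
        s += i
    return s


try:
    some_function(5)
except TypeError as exc:
    print(exc)  # 'int' object is not iterable
```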

Besides the fact that we still do a terrible job with the variable naming conventions (remember, variable names should be descriptive and meaningful!), this code is still easier to debug and understand.

Even better example: we can get rid of some_function() as well.
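Stripped down to its essence, the buggy code could be as short as this sketch (names are again illustrative):

```python
a = 5
s = 0
error_message = None
try:
    for i in a:  # the bug is now impossible to miss
        s += i
except TypeError as exc:
    error_message = str(exc)
print(error_message)  # 'int' object is not iterable
```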

Overall, we have shortened the code by more than half. Notice how much easier it is to find the bug now.
