How AI will impact programmers

In November 2022, I was among the millions of people who were amazed by ChatGPT's breathtaking coding abilities. Since then, I've observed that many programmers have reacted to ChatGPT with a mixture of confusion, excitement, and fear. To better understand the impact of ChatGPT on myself and my fellow programmers, I conducted interviews with a software engineer, a top AI researcher, and a CS professor. These three interviews were extremely eye-opening and helped to debunk some of the common narratives about ChatGPT that circulate on Reddit, Twitter, and elsewhere.

Disclaimer: Since ChatGPT (and LLMs more broadly) are still in their early days, there will probably be some statements in this article that don't age well. Hence, you should take this post with a grain of salt, and always seek to verify information with multiple sources/anecdotes.

Without further ado, here are several "myths" about LLMs that my research/interviews have debunked (at least for now):

Myth #1: All ChatGPT does is remix text it's been trained on, so it will always be dependent on human coders to improve

Research from top AI institutions has shown that ChatGPT does not just "remix" text it's been trained on. In fact, researchers have found that Large Language Models can come up with novel solutions to difficult coding problems that are not found in their training data.

For example, when a group of Google researchers entered one of their AI models into a coding competition with 5,000 human competitors, they observed two really important things:

  1. They found "no evidence that our model copies core logic from training data"
  2. Their model copied code from its training data at a rate very similar to the rate at which the human competitors did

But how could the human coders in this coding competition possibly know what was in Google's training set? Well, the answer is that the code that both the AI model and human competitors "copied" was "mostly boilerplate code for reading and parsing input data." In other words, both the humans and the AI model leaned heavily on extremely common coding patterns for tedious tasks like reading input data, but were still able to use a degree of creativity to come up with the core logic for solving a problem.
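For a sense of what this shared boilerplate looks like, here is a typical competitive-programming input-parsing pattern. This is an illustrative sketch, not code taken from the competition's dataset:

```python
def parse_input(raw: str) -> list[list[int]]:
    # Typical competitive-programming boilerplate: the first line gives
    # the number of test cases, followed by one line of space-separated
    # integers per case.
    lines = raw.strip().split("\n")
    n_cases = int(lines[0])
    return [list(map(int, lines[i + 1].split())) for i in range(n_cases)]

# Example: two test cases, one per line after the count.
cases = parse_input("2\n1 2 3\n4 5")
```

Nearly every competitive-programming submission starts with some variant of this, which is why both humans and the model reproduce it so often: it carries no problem-specific insight.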

These early signs of creativity open up the possibility that AI might one day be able to write code to improve itself, without intervention from humans.

Myth #2: If you are a software engineer who is worried about being replaced by ChatGPT, then you are probably bad at your job

As of today, Machine Learning models are objectively better than even the best human programmers at certain computational tasks, such as image recognition and image generation. However, it does not follow that the best human programmers are bad at their jobs. In the case of image recognition, Machine Learning has outstripped the abilities of all human programmers, competent and incompetent alike.

In fact, many of today's top human programmers have wholeheartedly embraced ChatGPT as a way of automating a large subset of their work, including Andrej Karpathy (formerly the head of AI at Tesla, now an engineer at OpenAI):

Copilot has dramatically accelerated my coding, it's hard to imagine going back to manual coding. Still learning to use it but it already writes ~80% of my code, ~80% accuracy. I don't even really code, I prompt. & edit. - Andrej Karpathy

No programmer is immune to AI's impact, so the best programmers will likely be the ones who are quickest to pivot to new AI-based tech stacks and take advantage of their productivity enhancements.

Myth #3: ChatGPT has made technical coding interviews obsolete

While ChatGPT has definitely made it a lot easier to cheat on coding interviews, I don't foresee coding interviews going away anytime soon. As of today, knowing the general outline of a solution to a coding problem is still a lot faster than not knowing how to solve it and asking ChatGPT to come up with a solution from scratch, because ChatGPT still makes lots of mistakes and struggles with hard coding problems (e.g., Leetcode Hard problems). Obviously, as ChatGPT gets better at coding, the nature of technical coding interviews will evolve. But as long as software engineers are around, technical coding interviews are here to stay.

Myth #4: ChatGPT is a chatbot

If I had to choose one important takeaway from my expert interviews, it's something Dr. White mentioned towards the end of our interview:

ChatGPT is not a text generation engine. It's not an oracle that answers questions like Google. I think all of those things are missing the mark. It's a completely new computational architecture that has capabilities that we don't fully understand yet. - Dr. Jules White, CS Professor @ Vanderbilt

This sentiment — that ChatGPT (and Large Language Models more broadly) are not "just chatbots," but rather, a whole new computational architecture — is one that is shared by some top AI researchers, including Andrej Karpathy:

With many 🧩 dropping recently, a more complete picture is emerging of LLMs not as a chatbot, but the kernel process of a new Operating System. E.g. today it orchestrates:

  • Input & Output across modalities (text, audio, vision)
  • Code interpreter, ability to write & run programs
  • Browser / internet access
  • Embeddings database for files and internal memory storage & retrieval

A lot of computing concepts carry over. Currently we have single-threaded execution running at ~10Hz (tok/s) and enjoy looking at the assembly-level execution traces stream by. Concepts from computer security carry over, with attacks, defenses and emerging vulnerabilities.

I also like the nearest neighbor analogy of "Operating System" because the industry is starting to shape up similar: Windows, OS X, and Linux > GPT, PaLM, Claude, and Llama/Mistral(?🙂). An OS comes with default apps but has an app store. Most apps can be adapted to multiple platforms.

TLDR looking at LLMs as chatbots is the same as looking at early computers as calculators. We're seeing an emergence of a whole new computing paradigm, and it is very early. - Andrej Karpathy
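The "kernel process" framing above can be made concrete with a minimal sketch of a tool-orchestration loop: the model decides which "device" (code interpreter, browser, etc.) to invoke, the runtime executes it, and the result is paged back into the model's context. All function and tool names here are hypothetical placeholders, not any real API:

```python
def run_kernel(model, tools, user_request, max_steps=10):
    """Sketch of an LLM acting as a 'kernel' that orchestrates tools.

    `model` is any callable that inspects the context and returns either
    {"type": "answer", ...} or {"type": <tool name>, "content": <tool input>}.
    `tools` maps tool names (e.g. "interpreter", "browser") to callables.
    """
    context = [user_request]
    for _ in range(max_steps):
        action = model(context)           # model picks a tool, or answers
        if action["type"] == "answer":
            return action["content"]
        tool = tools[action["type"]]      # dispatch to the chosen "device"
        result = tool(action["content"])  # run it...
        context.append(result)            # ...and feed the result back in
    return None  # give up after max_steps to avoid an infinite loop

# Usage with stand-ins for the model and a code-interpreter tool:
def toy_model(context):
    if any("sum=" in str(item) for item in context):
        return {"type": "answer", "content": context[-1]}
    return {"type": "interpreter", "content": "2+3"}

toy_tools = {"interpreter": lambda expr: "sum=" + str(eval(expr))}
answer = run_kernel(toy_model, toy_tools, "add 2 and 3")
```

The point of the sketch is the shape of the loop, not the stand-in model: every capability in Karpathy's list (browser, interpreter, embeddings database) slots in as another entry in `tools`.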

In Summary

In summary, I believe AI will disrupt software engineering more than any other field. Moreover, Large Language Models like ChatGPT are shaping up to be more than just "chatbots" and are probably the foundational technology for the next generation of computers. In fact, there are many startups trying to build consumer devices that are more deeply integrated with Large Language Models, including Humane and Rewind. There are also early signs that OpenAI might be building a consumer device. With all this change happening, I think we software engineers will do ourselves a favor by embracing and experimenting with new AI-based tech stacks, even if it's just on our own time.

The release of ChatGPT was a huge turning point in human history, so I think I will always fall short in describing how important it will be for software engineers and non-software engineers alike. Thus, I think it's best that I end this article with a quote from someone who is much better at putting words on a page than I am:

Nothing else in the world... not all the armies... is so powerful as an idea whose time has come. - Victor Hugo