In the previous two posts of this series, I shared my experiences with using ChatGPT to create a coding project, which involved some ups and downs. It took 78 prompts, totaling 350 lines of prompt text, to create a 118-line TypeScript project. Moreover, the process took four times longer than coding it by hand. Nevertheless, this was only the beginning of my exploration of the potential of Large Language Models (LLMs) in programming.
Over the past few months, I’ve observed that the capabilities of LLMs have improved significantly. Comparing the results from GPT-3.5 Turbo with my earlier experiences, the quality of the generated code has noticeably increased. With the release of GPT-4, the improvements are even more remarkable. The pace of development in LLMs is clearly rapid, and it remains uncertain whether there will be a natural barrier to these improvements, as we have seen in the past with other general AI approaches.
It’s essential to recognize the fundamental differences between code generation using AI and traditional model-driven software engineering (MDSE). MDSE relies on meta-models and domain-specific languages (DSLs) to produce deterministic code output: you specify exactly what you want, then use templating or other deterministic methods to generate the desired code.
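To make the contrast concrete, here is a minimal sketch of MDSE-style templating in TypeScript. The `Entity` meta-model and the `generateInterface` template are hypothetical, chosen for illustration; the key property is that the same model always yields the same code.

```typescript
// A minimal "meta-model": an entity with typed fields (hypothetical, for illustration).
interface Field {
  name: string;
  type: string; // a TypeScript type name, e.g. "string" or "number"
}

interface Entity {
  name: string;
  fields: Field[];
}

// A pure templating function: deterministic, so identical input
// produces byte-identical output on every call.
function generateInterface(entity: Entity): string {
  const lines = entity.fields.map((f) => `  ${f.name}: ${f.type};`);
  return [`interface ${entity.name} {`, ...lines, `}`].join("\n");
}

const customer: Entity = {
  name: "Customer",
  fields: [
    { name: "id", type: "number" },
    { name: "email", type: "string" },
  ],
};

const code = generateInterface(customer);
console.log(code);
```

Real MDSE toolchains are far richer (constraints, model transformations, round-tripping), but the determinism shown here is the property an LLM does not give you.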
AI, on the other hand, generates code from a statistical model, which means the results are not always consistent. The quality of the output might vary between two calls, even with identical input. Additionally, there could be potential licensing issues when using LLMs for code generation, as GPL-licensed code might be part of the training data.
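The variability comes from how LLMs pick tokens: they sample from a probability distribution rather than choosing deterministically. Below is a toy sketch in TypeScript; the vocabulary and logit values are made up for demonstration, but the softmax-with-temperature sampling mirrors how real decoders behave.

```typescript
// Toy illustration of why LLM output varies between calls with the same input:
// the next token is *sampled* from a distribution, not computed deterministically.

function softmax(logits: number[], temperature: number): number[] {
  const scaled = logits.map((l) => l / temperature);
  const max = Math.max(...scaled);
  const exps = scaled.map((l) => Math.exp(l - max)); // subtract max for numerical stability
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}

function sampleToken(vocab: string[], logits: number[], temperature: number): string {
  const probs = softmax(logits, temperature);
  let r = Math.random();
  for (let i = 0; i < vocab.length; i++) {
    r -= probs[i];
    if (r <= 0) return vocab[i];
  }
  return vocab[vocab.length - 1];
}

// A made-up three-token "vocabulary" with made-up scores.
const vocab = ["const", "let", "var"];
const logits = [2.0, 1.5, 0.1];

// Repeated calls with identical input will typically yield different tokens.
const samples = Array.from({ length: 100 }, () => sampleToken(vocab, logits, 1.0));
console.log(new Set(samples));
```

Lowering the temperature concentrates probability on the top token and makes output more repeatable, which is why APIs expose a temperature parameter, though even at low temperatures exact reproducibility is generally not guaranteed.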
Despite these concerns, the progress in LLMs opens up new possibilities in the programming world. As these models continue to advance, we can expect them to become more reliable, accurate, and efficient. The rapid pace of development also indicates that LLMs may soon become an indispensable tool for developers, helping them quickly prototype and create complex applications.
In conclusion, the journey of using LLMs for programming has been an exciting one, albeit with some challenges. While my initial experiences were not as smooth as I had hoped, the rapid advancements in LLMs show promise for the future. As developers, we must remain vigilant about the potential ethical and licensing concerns while harnessing the power of LLMs to create better software solutions.
Incorporating classical programming, few-shot learning, and fine-tuning within tools like Auto-GPT opens up a world of exciting possibilities for developers. By leveraging the strengths of classical programming, we can create robust foundations for our applications, while few-shot learning enables LLMs to quickly adapt to new tasks from only a handful of examples. This combination allows developers to integrate AI-generated code with their existing codebases, taking advantage of the efficiency and creativity LLMs provide. Moreover, fine-tuning can enhance tools like Auto-GPT by adapting smaller, less complex models to specific domains or tasks, resulting in more accurate and context-aware code generation. This hybrid approach, which marries classical programming techniques with the power of cutting-edge AI, has the potential to revolutionize software development, making it faster, more efficient, and adaptable to the ever-changing needs of users and industries.
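Few-shot learning in practice often just means constructing the prompt carefully. The sketch below assembles a few-shot prompt in TypeScript; the function name, the task wording, and the examples are all hypothetical, and the resulting string would be sent to an LLM API of your choice.

```typescript
// Hedged sketch: building a few-shot prompt for a code-translation task.
// The examples "teach" the model the expected input/output format.

interface Example {
  input: string;
  output: string;
}

function buildFewShotPrompt(task: string, examples: Example[], query: string): string {
  const shots = examples
    .map((ex) => `Input: ${ex.input}\nOutput: ${ex.output}`)
    .join("\n\n");
  // The prompt ends at "Output:" so the model completes it with its answer.
  return `${task}\n\n${shots}\n\nInput: ${query}\nOutput:`;
}

const prompt = buildFewShotPrompt(
  "Convert each JavaScript snippet to idiomatic TypeScript.",
  [
    {
      input: "function add(a, b) { return a + b; }",
      output: "function add(a: number, b: number): number { return a + b; }",
    },
  ],
  "function greet(name) { return `Hi ${name}`; }"
);
console.log(prompt);
```

Classical code handles the deterministic plumbing (assembling the prompt, validating the response), while the model handles the fuzzy transformation, which is exactly the division of labor the hybrid approach argues for.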
The future of programming with AI can be bright, and we are only witnessing the beginning of this transformative era.