Artemis AI: Combining the Power of LLMs and Evolutionary Optimisation for High-quality and High-performance Software Code

TurinTech AI
8 min read · Mar 7, 2024

Programming: More Than Just Language

Since 2023, there's been a real buzz around GenAI coding tools like GitHub Copilot, which attracted over 400,000 users within just a month! This trend marks a significant change in how we think about writing computer programs, as discussed in our previous blog about GenAI's role in software development.

For those worried that AI might take over programming jobs, there’s good news. The toughest part of software creation isn’t the coding, but the design of software architecture, which is still determined by humans. Thus, coding isn’t just about knowing a language. It’s much more than that — it’s a mix of language, logical thinking, and creativity for problem-solving.

Most GenAI code generation tools utilise LLMs based on the transformer architecture. This architecture is excellent for recognising language patterns but it has notable limitations, such as a limited understanding of your business context and enterprise system, restricted capabilities in performance and security, and reliance on the quality of its training data, which is often sourced from public code repositories like GitHub and StackOverflow.

Consider, for example, a common software application like a photo editing app. AI tools might manage to write code for basic functions such as cropping, but they struggle with more complex tasks. These include designing an app that’s efficient across various devices or introducing novel features, tasks that require deep logical thinking and innovative problem-solving. This is why LLMs alone cannot solve the multifaceted and nuanced task of programming.

Coding, Fast and Slow

In his groundbreaking book "Thinking, Fast and Slow," Nobel Prize-winning psychologist Daniel Kahneman introduces two distinct yet interconnected modes of thought: "System 1" is fast, instinctive, and emotional; "System 2" is slower, more deliberative, and logical. This dichotomy provides an insightful framework for understanding different approaches in AI-powered coding.

Coding Fast: The Pitfalls of LLM-based Code Generation

I’m not sure if this is your experience as well, but the recommendations I get from Copilot and scripts from GPT are often wrong. Sometimes I see inefficient solutions, code that has bizarre logic errors, or code that seems right but just doesn’t work.
- Vickie Li, Engineer at Instacart

The “Coding Fast” approach mirrors “System 1.” Here, tools like GitHub Copilot exemplify rapid, intuitive code generation. They quickly churn out code based on learned patterns, embodying the fast, instinctual responses of LLMs. However, this rapidity can sometimes lead to bad code and gaps in understanding complex, unique project needs, much like the hasty decisions in human cognition.

Figure 1: Challenges of Using LLMs for Coding

Coding Slow: The Necessity for Deliberate Logic and Quality Assurance

In contrast, “Coding Slow” mirrors “System 2” thinking. It’s a methodical, thoughtful approach to programming. This process is about taking the time to logically structure code, ensuring quality, reliability, performance and scalability. It involves foreseeing how different parts of the code will interact, predicting potential issues, and creatively troubleshooting them. This deliberate approach allows developers to weave in logic and creativity, leading to high-performance code that LLMs cannot yet generate.

In essence, “Coding Slow” is about understanding the project’s broader context and long-term implications, ensuring every line of code serves a purpose and contributes to the overall performance of the application.

Figure 2: Fast Coding vs Slow Coding

An Evolutionary System for Optimal Code

Is it possible to integrate the speed of ‘fast coding’ with the meticulousness of ‘slow coding’ to create high-quality, high-performance code?

We think yes. This synergy is best understood when we consider optimising code as an evolutionary process, similar to natural selection in biology. This is exemplified in TurinTech's "Darwinian Data Structure Selection" paper, which discusses using evolutionary algorithms to boost application performance and minimise runtime, memory and CPU usage.

In this context, fast-generated code is like the initial gene pool, providing the foundational population that undergoes evolution. Over time, this code evolves, improving in quality and adapting to its intended production environment and business goals. It's a continuous cycle of adaptation and refinement, where each iteration brings the code closer to peak performance.
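The cycle described above can be sketched in a few lines of Python. This is a minimal, generic illustration, not TurinTech's actual implementation: `fitness` and `mutate` are placeholders standing in for, say, a measured benchmark score and an LLM-driven code rewrite.

```python
import random

def evolve(initial_pool, fitness, mutate, generations=10, pool_size=8):
    """Evolve a pool of candidates toward higher fitness.

    fitness: scores a candidate (higher is better), e.g. a negated runtime.
    mutate: produces a varied candidate, e.g. an LLM rewriting the code.
    """
    pool = list(initial_pool)
    for _ in range(generations):
        # Variation: derive new candidates from randomly chosen parents.
        offspring = [mutate(random.choice(pool)) for _ in range(pool_size)]
        # Selection: keep only the fittest (survival of the fittest).
        pool = sorted(pool + offspring, key=fitness, reverse=True)[:pool_size]
    return max(pool, key=fitness)

# Toy usage: "candidates" are plain numbers, and fitness rewards
# closeness to a target of 100; real candidates would be code versions.
best = evolve([1, 5, 9],
              fitness=lambda c: -abs(100 - c),
              mutate=lambda c: c + random.randint(-3, 3))
```

Because selection always retains the current best candidate, fitness never decreases between generations, which is what makes the loop a "self-enhancing" one.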

Figure 3: Evolutionary Optimisation (Survival of the Fittest)

LLMs Advance Rapidly, Is an Evolutionary System Still Necessary?

Leading AI companies like OpenAI and Google are unveiling new AI models and product features at an unprecedented pace. This raises the question: Is it still necessary to combine LLMs with an evolutionary system?

In a recent article, "The Shift from Models to Compound AI Systems," Matei Zaharia, CTO and co-founder of Databricks, outlined four key benefits of using compound AI systems over relying solely on LLMs.

Let’s consider this within the context of software engineering:

  • Business Objectives Differ
    Each LLM has a set quality level and cost. However, to meet the wide range of application demands, businesses require the right mix of models to achieve an optimal balance of quality, cost, performance, and other metrics tailored to their specific needs.
  • Dynamic Systems
    Enterprise systems are inherently dynamic. LLMs, with their fixed knowledge derived from training data, lack the latest insights on in-house source code, proprietary libraries, tools, hardware and more.
  • Cost Efficiency
    Some programming tasks are more economically improved through iterative evolution than by scaling LLMs.
  • Enhanced Control, Safety, and Trust
    For instance, integrating LLMs with retrieval mechanisms can ensure code outputs are supported by credible references or verified data, enhancing trustworthiness.
Figure 4: Each LLM Has a Set Quality Level and Cost
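As a toy illustration of the "Business Objectives Differ" point, a compound system can route each task to the cheapest model that still clears a quality bar. The model names and metrics below are invented for the example; in practice they would come from benchmarking each LLM on your own tasks:

```python
# Hypothetical per-model metrics (quality score and cost per call).
MODELS = {
    "model-a": {"quality": 0.92, "cost_per_call": 0.030},
    "model-b": {"quality": 0.85, "cost_per_call": 0.004},
    "model-c": {"quality": 0.78, "cost_per_call": 0.001},
}

def cheapest_meeting(min_quality):
    """Pick the cheapest model that still meets a quality floor."""
    eligible = {name: m for name, m in MODELS.items()
                if m["quality"] >= min_quality}
    if not eligible:
        raise ValueError("no model meets the quality floor")
    return min(eligible, key=lambda name: eligible[name]["cost_per_call"])

print(cheapest_meeting(0.80))  # a cost-sensitive batch task
print(cheapest_meeting(0.90))  # a quality-critical task
```

Raising the quality floor changes the answer, which is the whole point: no single fixed model is optimal across every business objective.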

Artemis AI: Combining the Power of LLMs and Evolutionary Optimisation for High-quality and High-performance Software Code

We’ve explored the idea of evolutionary coding for peak performance. But how does this translate into real-world applications? At TurinTech AI, backed by a decade of research in evolutionary optimisation, we’ve created Artemis AI, our GenAI developer platform for peak performance. This platform enables developers to flexibly utilise LLMs (including our proprietary code performance LLMs), coupled with evolutionary optimisation and robust testing, to generate high-quality, high-performance software code.

Figure 5: Artemis AI Combines LLMs with Evolutionary Optimisation

1. Import Code Projects
First, developers have the flexibility to either import their code project directly from Git repositories (e.g., GitHub) or upload their project files to Artemis AI. It supports all major programming languages, including C++, Java, and Python.

2. Code Analysis
Developers can use Artemis Assistant to easily chat and interact with their code and gain a better understanding of the pieces of code that can be changed. For example, they can ask Artemis to pinpoint code segments for refactoring or discover opportunities to increase speed and security.

3. LLM Recommendations
Next, Artemis AI utilises our proprietary LLMs, specially fine-tuned for performance optimisation, or other LLMs of your choice (such as GPT-4, Gemini, Claude), to rapidly generate new code versions. Users can select multiple LLMs at the same time.

To make it easier to choose the right LLMs for your programming tasks, Artemis AI shows you detailed metrics for each LLM, such as cost and inference time, and recommends the optimal combination of LLMs by leveraging the Artemis AI Adaptive Learning Engine.

4. Validation Checks
Artemis AI conducts rigorous validation checks on code versions generated by LLMs in step 3. This includes compilation tests to ensure functionality, unit tests for detailed accuracy, security assessments for safeguarding, and IP reviews to avoid copyright issues.
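A stripped-down version of such a validation gate, for Python candidates only, might chain a compilation check with a unit-test run. This is an illustrative sketch, not Artemis AI's actual pipeline, and it omits the security assessment and IP review stages:

```python
import subprocess
import sys
import tempfile

def validate(candidate_source, test_source):
    """Return True only if the candidate compiles and passes its unit tests."""
    # Compilation check: reject candidates with syntax errors outright.
    try:
        compile(candidate_source, "<candidate>", "exec")
    except SyntaxError:
        return False
    # Unit-test check: run the tests against the candidate in a fresh process.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(candidate_source + "\n" + test_source)
        path = f.name
    result = subprocess.run([sys.executable, path], capture_output=True)
    return result.returncode == 0

good = "def add(a, b):\n    return a + b\n"
tests = "assert add(2, 3) == 5\n"
print(validate(good, tests))                         # compiles and passes
print(validate("def add(a, b) return a + b", tests)) # fails compilation
```

Only candidates that survive every gate enter the optimisation pool in the next step; a candidate that compiles but computes the wrong result is caught by the unit-test stage.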

5. Optimisation: Evolving LLMs’ Recommendations
Developers can specify their performance criteria (e.g., runtime) and provide additional context by chatting with the Artemis Assistant. Artemis AI then evolves validated code versions against these custom criteria. The best versions are re-added to the existing pool (survival of the fittest) to seed new generations, creating a self-enhancing loop. Through continuous cycles of generation and evaluation, successive code versions converge toward the optimal version.
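To make a runtime criterion concrete, here is one simple way to score candidates by measured execution time using Python's standard `timeit` module. The two candidate functions are invented examples of what different LLMs might propose for the same task:

```python
import timeit

def runtime_fitness(fn, args=(1000,), repeats=5):
    """Score a candidate by its best observed runtime (lower is fitter)."""
    return min(timeit.repeat(lambda: fn(*args), number=100, repeat=repeats))

# Two functionally equivalent candidates for "sum the integers below n":
def sum_loop(n):
    total = 0
    for i in range(n):
        total += i
    return total

def sum_builtin(n):
    return sum(range(n))

# Keep whichever survives the fitness comparison (lower runtime wins).
survivor = min([sum_loop, sum_builtin], key=runtime_fitness)
```

Taking the minimum over several repeats reduces noise from background load; whichever candidate wins, both return identical results, so correctness is preserved while performance improves.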

6. Export Final Code
Artemis AI not only provides the final optimised code but also delivers it with a comprehensive report on quality, security and performance, along with explanations of the changes made by LLMs. Developers receive ready-to-deploy, high-quality code, streamlining their development process with confidence. Additionally, developers have the option to directly download their code or to automatically generate a pull request in their Git repository, facilitating seamless integration into their existing projects.

Last but not least, Artemis AI can securely connect to your codebases and enterprise systems through on-prem deployment. Every time you run a project with Artemis AI, it gathers insights (your preferences, how different LLMs perform on your unique tasks, which code versions were accepted) and feeds them into its Adaptive Learning Engine. This allows Artemis AI to continuously enhance its ability to deliver higher-quality code more swiftly for your future projects.

Navigating the rapidly evolving AI landscape to identify the best practices for building an evolutionary system for software engineering is challenging and resource-intensive. That's why we've built Artemis AI: to let you and your team focus on creating value and stay ahead in the AI game instead of building and maintaining tools, and to foster a more productive, happier development team.

Unlock the Full Potential of your Code with Artemis AI

As discussed earlier, developing high-quality, high-performance software for intricate business scenarios is a massive challenge, one that often extends beyond the capabilities of LLMs alone. This is where Artemis AI steps in, offering a powerful platform designed for enterprise-level quality, security, and performance. It empowers developers with a flexible framework that not only leverages LLMs and evolutionary optimisation but also integrates the critical thinking of human developers. This ensures a holistic approach to software development, enabling organisations to maximise AI impact with improved control and trust.

About the Author

Wanying Fang ​| TurinTech AI Marketing

Originally published at https://www.turintech.ai on March 7, 2024.
