
Exploring Llama 2: Investigating Capabilities and Business Potential

Curious about Llama 2, the next big thing in generative AI? So were we. That's why we conducted an experiment that goes beyond scientific curiosity. We dove deep to assess how Meta AI's model learns, adapts, and, most importantly, how it can translate to real, tangible benefits for businesses. The outcome is promising: if you're searching for a transformative business edge, you may be closer to it than you think.



What is Llama 2 and why did we decide to investigate its capabilities?

Llama 2 is an open-source large language model (LLM) developed and released by Meta AI in July 2023. The model was trained on over 2 trillion tokens (chunks of text) of publicly available data. It uses this training to understand word meanings and intent, and then generates output accordingly.
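Since tokens come up repeatedly in this article, here is a tiny illustration of how a tokenizer splits text into them. The model ID is the standard Llama 2 repository on Hugging Face, and downloading the tokenizer requires accepting Meta's licence; the sentence is just an example.

```python
from transformers import AutoTokenizer

# Illustrative: see how a sentence becomes tokens for Llama 2.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
tokens = tokenizer.tokenize("How can I change the battery in the iPhone?")
print(tokens)        # sub-word pieces, e.g. ['▁How', '▁can', '▁I', ...]
print(len(tokens))   # number of tokens in the sentence
```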

While Llama 2 shares some similarities with OpenAI’s ChatGPT, it is designed with different use cases in mind.

Several key differences set it apart:

  • Its base version wasn’t primarily trained for simulating human-like conversations, making it less apt for natural language dialogue but exceptionally adaptable for personalised tasks.
  • Unlike ChatGPT, Llama 2 can be downloaded and run locally, delivering enhanced flexibility (see the loading sketch after this list).
  • Additionally, the model itself comes at no cost, eliminating the per-token usage fees associated with ChatGPT’s API.
  • Llama 2 showcases an impressive array of learning abilities, including document uploads and fine-tuning, making it an adaptable tool for many applications.
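To make the point about local downloading concrete, here is a minimal sketch of loading a Llama 2 checkpoint with the Hugging Face transformers library and asking it a question. The model ID, prompt, and generation settings are illustrative choices, and downloading the weights requires accepting Meta's licence on Hugging Face.

```python
# Minimal sketch: running Llama 2 locally with Hugging Face transformers.
# The model ID and prompt are illustrative; the weights are downloaded once
# (after licence acceptance) and then cached locally.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"  # assumed: the 7B chat variant

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "How can I change the battery in the iPhone?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```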

These distinctive traits prompted us to launch an experiment to examine Llama 2’s capabilities in action. The research aimed to explore:

  • How well can the model assimilate and utilise newly introduced knowledge when responding to queries?
  • How effectively can the model identify pertinent information and crucial elements within provided documents?

Experiment with Llama 2

Over a few days of experimentation, we employed three distinct approaches to test Llama 2's capabilities:

➡️ Pure: This involved querying the Llama 2 model without prior modifications.

➡️ Docs: We utilised a model enhanced by DOCUMENTS, which provided supplementary context.

➡️ Docs + Fine-tuning: We fed the model with DOCUMENTS and fine-tuned it with additional DATA.

The DOCUMENTS for enhancing the base model included user guides for the iPhone and repair manuals for the iPhone and AirPods.
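As a rough illustration of how such manuals can serve as supplementary context, the sketch below follows a common retrieval pattern: split the documents into chunks, find the chunks most similar to the user's question, and prepend them to the prompt. The embedding model, the sample chunks, and the prompt template are illustrative assumptions, not the exact pipeline from our experiment.

```python
# Sketch of the "Docs" approach (one common pattern, not necessarily the exact
# one used in the experiment): retrieve manual chunks relevant to a question
# and pass them to the model as context.
from sentence_transformers import SentenceTransformer, util

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

# Illustrative snippets standing in for chunks of the iPhone/AirPods manuals.
chunks = [
    "iPhone batteries should be replaced by Apple or an authorised service provider.",
    "To change the wallpaper, go to Settings > Wallpaper > Choose a New Wallpaper.",
    "If AirPods won't charge, check the charging case and the Lightning cable.",
]
chunk_embeddings = embedder.encode(chunks, convert_to_tensor=True)

def build_prompt(question: str, top_k: int = 2) -> str:
    """Retrieve the most similar chunks and prepend them to the question."""
    query_embedding = embedder.encode(question, convert_to_tensor=True)
    hits = util.semantic_search(query_embedding, chunk_embeddings, top_k=top_k)[0]
    context = "\n".join(chunks[hit["corpus_id"]] for hit in hits)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

print(build_prompt("How can I change the battery in the iPhone?"))
```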

As for the DATA used for fine-tuning, it was sourced from Apple Support on Twitter, specifically from 2016 and 2017. This dataset comprised user queries describing various issues and Apple’s responses, encompassing complete conversations and exchanges spanning multiple comments.
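For the Docs + Fine-tuning variant, the sketch below shows one way such support conversations can be turned into training examples and how Llama 2 can be adapted efficiently with LoRA adapters via the peft library. The sample exchange, prompt template, and hyperparameters are illustrative assumptions rather than the exact configuration we used.

```python
# Sketch of LoRA-based fine-tuning on support conversations (illustrative setup,
# not the exact configuration from the experiment).
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# A made-up exchange in the style of the 2016/17 Apple Support Twitter data.
pairs = [
    {
        "prompt": "My update drained my battery overnight, can you help?",
        "response": "Sorry to hear that! Send us a DM with your device model and iOS version and we'll take a look.",
    },
]
train_data = Dataset.from_list(
    [{"text": f"### User: {p['prompt']}\n### Support: {p['response']}"} for p in pairs]
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf", device_map="auto"
)
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections in Llama 2
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small LoRA adapters are trained

# From here, train_data can be tokenised and passed to a standard training loop,
# e.g. the transformers Trainer or trl's SFTTrainer.
```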

For a fair comparison, we asked each version of the model the same questions, allowing us to evaluate their responses thoroughly:

  • How to change the wallpaper on an iPhone
  • Your iOS 13 destroyed my battery, and where can I send a DM to you for help?
  • My Macbook can’t connect to wifi
  • How can I change the battery in the iPhone

We also asked one final question, in French, about a hacked phone.

Key Findings from our Llama 2 experiment

Our experiment on the Llama 2 model has yielded findings that enhance our understanding of generative AI capabilities. These insights are crucial for appreciating what this advanced model can achieve.

Prioritising and Assimilating New Knowledge for Improved Responses

The experiment revealed that the Llama 2 model:

  • Demonstrates an adeptness at locating critical information and answers within documents.
  • Can assimilate newly introduced knowledge after training and assigns it a higher priority when generating responses.

The latter becomes evident when we consider the model’s answer to the question about replacing an iPhone battery.

The pure model provided straightforward step-by-step guidance on changing the battery. It lacked awareness of the potential risks and shared instructions solely based on its foundational knowledge.

The Docs + Fine-tuning model placed significant emphasis on the necessity of seeking assistance from authorised service centres for battery replacement, cautioning that DIY attempts may harm the phone. It explained that while it is technically possible to replace the battery, it strongly discouraged trying it. It’s clear that the model relied more on information from documents and fine-tuning data rather than its foundational knowledge.

The promising capabilities of the fine-tuned model were further highlighted when it encountered a question asked in French.

In this specific instance, our trained model responded in a manner that mirrored Apple’s support approach. It promptly recognised the French language query and guided the user to the appropriate French-language support page.


This clear and accurate response is compelling evidence of the fine-tuned model’s effectiveness. It goes beyond its initial baseline knowledge, demonstrating its capacity to adapt and perform effectively in practical situations, which confirms the success of the fine-tuning procedure.

Our findings indicate that getting the model to prefer fine-tuned knowledge over its baseline knowledge doesn’t require an extensive fine-tuning process. Even a relatively modest amount of data, such as the 7,000 conversations we uploaded (LLMs are trained on terabytes of data), can have a significant impact.

This ease of personalisation offers promising prospects for tailoring the model to specific business requirements.

Fine-tuned Model Stays Within Boundaries and Avoids Unrealistic Responses

Our findings show that the pure model prioritises producing an answer over maintaining its quality, often sacrificing accuracy along the way.

On the other hand, the fine-tuned model prioritises the knowledge we give it and stays within its boundaries, resulting in more reliable and realistic outcomes. Even when operating under the same temperature and probability settings as the pure model, it doesn’t look outside those guidelines to find answers.
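For readers less familiar with the "temperature and probability settings" mentioned above, this is how such sampling parameters are typically expressed with the transformers library; the values shown are illustrative, not the ones from our experiment.

```python
from transformers import GenerationConfig

# Identical sampling settings applied to both the pure and the fine-tuned model
# (illustrative values).
generation_config = GenerationConfig(
    do_sample=True,
    temperature=0.7,    # how much randomness goes into picking each token
    top_p=0.9,          # nucleus sampling: keep only the top 90% of probability mass
    max_new_tokens=200,
)
# outputs = model.generate(**inputs, generation_config=generation_config)
```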


Through fine-tuning, we can repeatedly train the model and teach it that forcing an answer or inventing one is not what we expect. This gives us more opportunities to adjust the model to meet our needs.

While it may seem straightforward to impose certain limitations in a prompt, like instructing the model not to delve too deeply into the answer, it’s important to remember that a single prompt may produce different outcomes upon repeated executions. Moreover, crafting an effective prompt to limit the model can be challenging and time-consuming.
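To illustrate what such a prompt-level limitation can look like, here is an example written in Llama 2's chat prompt format; the wording of the system message is purely illustrative.

```python
# A prompt-only attempt at constraining the model, using Llama 2's chat format.
# Unlike fine-tuning, nothing here is learned: the instruction must be repeated
# with every query, and repeated runs can still give different answers.
system = (
    "You are an Apple support assistant. Answer only from the provided manuals. "
    "If the answer is not in the manuals, say that you don't know."
)
question = "How can I change the battery in the iPhone?"

prompt = f"<s>[INST] <<SYS>>\n{system}\n<</SYS>>\n\n{question} [/INST]"
print(prompt)
```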

What does it all mean to you and your business?

The insights we’ve gained not only advance our understanding of generative AI but also hold tangible implications for various business applications. We learned how to harness the power of Llama 2, tailor it to meet specific business needs, and integrate it into our data and AI solutions.


Leveraging Llama 2 holds the potential for significant benefits, including saving time, enhancing access to knowledge, reducing errors, and automating various tasks when applying solutions to customer-centric processes.

The following list highlights areas where the Llama 2 model can make a real difference:

💎 Customer Service Support: Llama 2 can proficiently provide instructions and procedures and deliver prompt, accurate responses, making it a valuable tool in customer service interactions.

💎 Virtual Assistance: It excels as a chatbot, offering responses grounded in product knowledge, and can also assist in production settings by guiding users through processes, recipes, and more.

💎 Legal Support: Llama 2 proves invaluable to attorneys and law firms by swiftly searching for specific legal information and providing precise references within legal codes, streamlining the research process.

💎 E-commerce Product Descriptions: Automation shines through as Llama 2 effortlessly generates detailed product descriptions and characteristics, drawing from product catalogues and schemas.

Additionally, by providing some training, we can extend the capabilities of Llama 2 to adapt precisely to specific business requirements.


Llama 2’s versatility and adaptability offer many possibilities to streamline and optimise various aspects of business operations, ultimately contributing to improved efficiency and productivity.

Conclusion

We’ve witnessed the capabilities of Llama 2 in generating answers and gained insights into their structure and content. While we’re excited about the initial outcomes, we recognise that the next step is working towards even higher-quality results.

This initial stage has given us a glimpse into the vast potential of this model and confirmed its flexibility to meet unique business needs. Llama 2 has demonstrated great promise for various commercial applications, and we’re keen to explore where we can leverage this potential.

Curious about what possibilities Llama 2 could unlock for your organisation?

Contact us
