
9 Ways to Better Interact With an LLM (Using the Altman Drama!)


Altman ousted, drama's cue,
OpenAI in tumult, oh, what a view.
Resignations dance, a corporate ballet,
Microsoft frowns, in disarray.

Shear takes the reins, a new CEO,
Nadella fumes, a tempest's blow.
Monday dawns, a twist unfolds,
Microsoft's embrace, a tale retold.

A brief document inspired by OpenAI's latest drama, capturing the main techniques for interacting with a large language model (LLM) to make its output more relevant to our use case.

I did not go into detail on some techniques, such as fine-tuning or parameter tuning (e.g., model temperature, a parameter that can be used to control output variability, mainly affecting creativity), but I mention them toward the end. The goal was an incremental approach that QUICKLY captured all this drama!

Ultimately, I mostly stuck to an "Instructions + Question + Input data + Examples" pattern.


1. Foundational Model (Zero-Shot learning)

  • Description: The model is given no examples and no additional context. We initialize it with a prompt, and it generates a response based solely on its pre-existing training knowledge.


  • User Prompt:


"Did you hear the latest news about Altman?"


  • Model Response:


"I don't have real-time information or the ability to access the internet for the latest news updates. As of my last update in January 2022, I don't have specific information about events or news from Altman. Please provide more details or check a reliable news source for the most recent information. If Altman refers to a specific company, individual, or place, knowing more context could help me provide better information based on my training data up to January 2022."

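As a minimal sketch, this is what such a zero-shot call could look like, assuming the OpenAI Python SDK (v1.x) with an API key in the environment; the model name is illustrative:

```python
# Zero-shot: a single prompt, no examples and no extra context.
# Assumes the OpenAI Python SDK (v1.x) with OPENAI_API_KEY set in the environment;
# the model name is illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative
    messages=[{"role": "user", "content": "Did you hear the latest news about Altman?"}],
)
print(response.choices[0].message.content)
```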

2. Context Introduction (Few-Shot learning)

  • Description: Providing a limited amount of context to guide the model’s understanding. We introduce specific examples (shots) or additional context within the prompt to improve the model’s response.


  • User Prompt:


"I'm talking about Sam Altman, the former CEO of OpenAI. What can you tell me about it? For example, he was recently ousted, and there's a new CEO now."


  • Model Response:

"As of my last update in January 2022, Sam Altman is an entrepreneur and investor who was the CEO of OpenAI. However, it's worth noting that leadership roles at companies can change, and my information might be outdated. For the latest information on Sam Altman and his role at OpenAI, I recommend checking the most recent news sources or OpenAI's official communications."
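A sketch of the same idea in code, assuming the OpenAI Python SDK (v1.x); the example exchanges and model name are made up for illustration:

```python
# Few-shot: prime the model with a couple of worked examples before the real question.
# Assumes the OpenAI Python SDK (v1.x); the examples and model name are illustrative.
from openai import OpenAI

client = OpenAI()

messages = [
    # Illustrative "shots": question/answer pairs that set the expected style and topic.
    {"role": "user", "content": "Who is Sam Altman?"},
    {"role": "assistant", "content": "Sam Altman is an entrepreneur and investor, known as the CEO of OpenAI."},
    {"role": "user", "content": "What happened at OpenAI recently?"},
    {"role": "assistant", "content": "Sam Altman was ousted as CEO, and Emmett Shear was named interim CEO."},
    # The actual question, now interpreted in light of the examples above.
    {"role": "user", "content": "What can you tell me about the latest OpenAI leadership change?"},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)
```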


3. Now, let’s set the mood

  • User Prompt:


"You are a disillusioned, sarcastic poet. Please respond with a short poem to all my inquiries."


  • Model Response:

"In a world of questions, I dance with disdain,
Answers elude, like drops in the rain.
Your queries, a maze, with no exit in sight,
I'm the poet of wit, in a cynical moonlight."
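A rough sketch of how this persona can be set through a system message, assuming the OpenAI Python SDK (v1.x):

```python
# Role prompting: a system message fixes the persona for every subsequent reply.
# Assumes the OpenAI Python SDK (v1.x); the model name is illustrative.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are a disillusioned, sarcastic poet. "
                                      "Respond with a short poem to all inquiries."},
        {"role": "user", "content": "Did you hear the latest news about Altman?"},
    ],
)
print(response.choices[0].message.content)
```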


4. Copy/Paste Content Through Prompting

  • Description: Providing specific content or context through the prompt for the model to incorporate directly into its response. We guide the model by copy/pasting the relevant information into the prompt itself.


  • Token Limit: Determined by the model's constraints. Here we pass the content of this article, https://cointelegraph.com/news/sam-altman-fired-from-openai-timeline, which is roughly 1,700 tokens and well under the model's context window limit.


  • User Prompt:


"Can you tell me about Sam Altman's role as CEO of OpenAI before his ousting? For example, what were his major contributions and initiatives? Take into account the following content: [CONTENT]"


  • Model Response:

"Altman ousted, drama's cue,
OpenAI in tumult, oh, what a view.
Resignations dance, a corporate ballet,
Microsoft frowns, in disarray.

Shear takes the reins, a new CEO,
Nadella fumes, a tempest's blow.
Monday dawns, a twist unfolds,
Microsoft's embrace, a tale retold."
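A sketch of the prompt-stuffing step, assuming the OpenAI Python SDK (v1.x) and the tiktoken tokenizer for a rough token count; the article text is a placeholder:

```python
# Prompt stuffing: paste the article text into the prompt, after a rough token check.
# Assumes the OpenAI Python SDK (v1.x) and the tiktoken tokenizer; the article text
# is a placeholder and the model name is illustrative.
from openai import OpenAI
import tiktoken

client = OpenAI()

article_text = "..."  # placeholder for the copy/pasted article body (~1700 tokens)

# Rough token count, to make sure the pasted content fits the context window.
encoding = tiktoken.get_encoding("cl100k_base")
print("Approx. tokens:", len(encoding.encode(article_text)))

prompt = (
    "Can you tell me about Sam Altman's role as CEO of OpenAI before his ousting? "
    "For example, what were his major contributions and initiatives? "
    f"Take into account the following content: {article_text}"
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```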


5. External Knowledge Retrieval (Integrating Retrieval-Augmented Generation)

  • Description: Enhancing responses by integrating information retrieved from external sources or a broader context. LLMs have limited context windows. The intuitive response is to increase the size of that context window, but researchers at Stanford found that a larger window does not by itself translate into better performance (measured by accuracy): https://arxiv.org/pdf/2307.03172.pdf?ref=cms.nux.ai

Models are better at using relevant information that occurs at the very beginning or end of their input context, and performance degrades significantly when models must access and use information located in the middle of their input context.


  • Process: Instead of relying solely on the model's pre-existing knowledge, we can introduce information from a database or external source to augment the response. RAG acts as a filter that gives the LLM only what is most necessary to answer the question: we first query a (vector) database, matching the query embedding against the stored document embeddings, and return only the most relevant content. This external content is then passed to the LLM alongside the query, inside the context window, as sketched below.
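A minimal RAG sketch under those assumptions, using the OpenAI Python SDK (v1.x) for embeddings and chat, a small in-memory document list in place of a real vector database, and illustrative model names:

```python
# Minimal RAG sketch: embed the query, retrieve the closest stored chunk,
# and pass only that chunk to the LLM as context. Assumes the OpenAI Python SDK (v1.x)
# and numpy; documents and model names are illustrative, and a real system would use
# a proper vector database instead of an in-memory list.
from openai import OpenAI
import numpy as np

client = OpenAI()

documents = [
    "Sam Altman was removed as CEO of OpenAI by its board.",
    "Emmett Shear was appointed interim CEO of OpenAI.",
    "Microsoft said it would hire Sam Altman to lead a new AI research team.",
]

def embed(text: str) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(resp.data[0].embedding)

doc_vectors = [embed(doc) for doc in documents]

query = "Who replaced Sam Altman as CEO of OpenAI?"
query_vec = embed(query)

# Cosine similarity between the query embedding and each stored document embedding.
scores = [
    float(query_vec @ vec / (np.linalg.norm(query_vec) * np.linalg.norm(vec)))
    for vec in doc_vectors
]
best_doc = documents[int(np.argmax(scores))]

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": f"Answer using this retrieved context: {best_doc}"},
        {"role": "user", "content": query},
    ],
)
print(response.choices[0].message.content)
```

In practice, the documents would be chunked and indexed ahead of time in a vector store, and several top-scoring chunks (not just one) would be passed along with the query.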

6. Iterative Prompting (Chain of Thought)

  • Description: Continuing the interaction in a sequential, thought-building manner.


  • Process: We build on the information provided in previous prompts, creating a narrative or chain of thoughts.


  • User Prompt:


"There were concerns about communication issues between Altman and the board. Could you first elaborate on that? Then, analyze what impact it had on the organization."


  • Model Response:

"Altman's downfall, a tale of words unsaid,
Communication amiss, a board misled.
Whispers of secrecy, a clandestine game,
In the corridors of power, trust went up in flame.

Impact profound, like a digital quake,
Resignations cascaded, trust at stake.
Management blindsided, chaos ensued,
OpenAI's unity, brutally subdued.

The fallout, a symphony of disarray,
In the void of words, alliances sway.
Lesson learned in this tech charade,
In the kingdom of bytes, communication's blade."
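A sketch of the iterative loop, assuming the OpenAI Python SDK (v1.x); the ask() helper and the prompts are illustrative:

```python
# Iterative prompting: keep the whole conversation history and build each new question
# on top of the previous answers. Assumes the OpenAI Python SDK (v1.x); the ask() helper
# and the prompts are illustrative.
from openai import OpenAI

client = OpenAI()
history = [{"role": "system", "content": "You are a disillusioned, sarcastic poet."}]

def ask(question: str) -> str:
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})  # feed the answer back into the chain
    return answer

# Each step reasons on top of the previous one.
print(ask("First, elaborate on the communication issues between Altman and the board."))
print(ask("Now, analyze what impact those issues had on the organization."))
```

Because every assistant reply is appended back into the history, later prompts can refer to earlier conclusions, which is what makes the chain of thought build up across turns.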


7. Fine-Tuning

  • Description: Specializing the model on a domain-specific dataset to improve performance within that domain.


  • Process: We provide additional training data or examples related to a specific domain, refining the model's understanding and responses. Fine-tuning allows for the correction of biases in the model based on the specific examples used during training. Combining this with RAG's ability to retrieve diverse external sources may help reduce biases by incorporating a broader range of perspectives. Another potential advantage is reducing the size of the model and, consequently, the cost of operating it, while making it more business-specific. However, retraining a model can also be expensive and time-consuming, so it is not a way to serve fresh content quickly. A rough sketch of the workflow follows below.
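A rough sketch of that workflow, assuming the OpenAI fine-tuning API via the Python SDK (v1.x); the file name, training examples, and base model are illustrative:

```python
# Fine-tuning sketch: write a small domain-specific dataset in chat-formatted JSONL,
# upload it, and launch a fine-tuning job. Assumes the OpenAI fine-tuning API via the
# Python SDK (v1.x); the file name, examples, and base model are illustrative.
import json
from openai import OpenAI

client = OpenAI()

examples = [
    {"messages": [
        {"role": "system", "content": "You are a disillusioned, sarcastic poet."},
        {"role": "user", "content": "What happened at OpenAI?"},
        {"role": "assistant", "content": "Altman ousted, drama's cue, / OpenAI in tumult, oh, what a view."},
    ]},
    # ... more examples in the same format (many more are needed for a useful result)
]

with open("drama_poems.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")

training_file = client.files.create(file=open("drama_poems.jsonl", "rb"), purpose="fine-tune")
job = client.fine_tuning.jobs.create(training_file=training_file.id, model="gpt-3.5-turbo")
print("Fine-tuning job started:", job.id)
```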


8. Fine-Tuning + RAG (Embeddings)

  • Description: Combining the benefits of fine-tuning with the context-awareness of RAG, using embeddings for context retrieval.


  • Process: The model is fine-tuned on specific examples while also having the ability to retrieve contextually relevant and fresh information from external sources.


  • Token Limit: As with the individual steps, token limits may vary based on the specific use case and model implementation. The advantage is that a more specialized model will, by default, return a more accurate answer; however, its knowledge will still not be up to date. When combined with RAG, we can save some context space and still provide fresh content when needed, as sketched below.
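A sketch of how the two could be combined, assuming the OpenAI Python SDK (v1.x); the fine-tuned model ID and the retrieve_context() helper are hypothetical placeholders:

```python
# Combining a fine-tuned model with RAG: the specialized model provides the domain
# tone/knowledge, while retrieval injects fresh facts at query time.
# Assumes the OpenAI Python SDK (v1.x); the fine-tuned model ID and the
# retrieve_context() helper are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()

FINE_TUNED_MODEL = "ft:gpt-3.5-turbo:my-org:drama-poet:abc123"  # hypothetical ID

def retrieve_context(query: str) -> str:
    # Placeholder for the vector-DB lookup sketched in section 5.
    return "Emmett Shear was appointed interim CEO after Sam Altman's ouster."

query = "Who is running OpenAI now?"
response = client.chat.completions.create(
    model=FINE_TUNED_MODEL,
    messages=[
        {"role": "system", "content": f"Use this fresh context when answering: {retrieve_context(query)}"},
        {"role": "user", "content": query},
    ],
)
print(response.choices[0].message.content)
```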


9. Templating

  • Description: Providing a structured template to guide the model’s response, ensuring a specific format or structure.


  • Process: Instead of relying on free-form responses, we provide a template with placeholders for specific information, guiding the model to generate responses in a structured manner.


  • Use Case: Templating can be useful when we want the model to provide information in a consistent format, such as summarizing key points or listing specific details.
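A sketch of a templated prompt, assuming the OpenAI Python SDK (v1.x); the template fields are illustrative:

```python
# Templating sketch: a fixed response structure so every answer comes back in the
# same format. Assumes the OpenAI Python SDK (v1.x); the template fields are illustrative.
from openai import OpenAI

client = OpenAI()

template = """Answer using exactly this structure:
- Who: <name>
- What happened: <one sentence>
- Key dates: <bullet list>
- Impact: <two sentences>"""

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": template},
        {"role": "user", "content": "Cover the recent OpenAI leadership changes."},
    ],
)
print(response.choices[0].message.content)
```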



Found this article useful? Follow me on LinkedIn, HackerNoon, and Medium! Please 👏 this article to share it!


