How to work
Our AI oracle in Copenhagen, Alex, can be booked on Saturday between 9:00 and 14:00 for a 15-minute slot. Just click the link below and book your preferred time. First come, first served.
Booking link: https://calendar.app.google/vRxZdB9xLnfoRoZYA
4 Elements of Output in AI Solutions
1. Large Language Models (LLM)
Foundation: The LLM serves as the core of the AI solution.
Capabilities and Limitations: Each model has unique strengths and limitations, affecting
factors like reasoning, creativity, and factual accuracy.
Token Limits: Different models have different context windows; for example, GPT-4 can handle up to 128k tokens per chat and Claude up to 200k. A token is a small chunk of text, roughly a short word or part of a word, so the context window caps how much information the model can process at once (see the sketch at the end of this section).
Model Selection: Choosing the right model for the task is essential, as different models
excel in specific types of output, such as creative writing vs. factual analysis.
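As a rough illustration of what a token budget means in practice, OpenAI's open-source tiktoken library can count how many tokens a piece of text uses, so you can estimate how much of the context window a document will take up. This is just a sketch for the curious, not part of our solution, and the file name is made up:

    import tiktoken  # pip install tiktoken

    # cl100k_base is the tokenizer used by GPT-4-class models
    encoding = tiktoken.get_encoding("cl100k_base")

    # "contract_draft.txt" is a made-up file name for this example
    with open("contract_draft.txt", encoding="utf-8") as f:
        text = f.read()

    tokens = encoding.encode(text)
    print(f"{len(tokens)} tokens used out of a 128,000-token window")

As a rule of thumb, one token is roughly three quarters of an English word, so a 128k window corresponds to something like 90,000-100,000 words.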
2. External Sources
Purpose: Attached files expand the model's knowledge and align it with specific subject
fields, helping control the data the model bases its responses on.
Typical Attachments: Common sources include court cases, legal articles, firm
templates, book chapters, examples of previous work, and inspirational materials.
Quality and Relevance: High-quality, relevant attachments improve the accuracy and
specificity of the output.
Privacy and Security: Our solution is GDPR compliant.
Source Validation: Check the accuracy and credibility of attached sources to ensure
reliable responses.
Attachment Formatting: Ensure files are formatted in a way the model can process (e.g.,
clear text, standard file types), as formatting issues may reduce comprehension. Our
model does not currently read images, video, or audio (see the sketch at the end of this section).
Scope Limitation: Be mindful of the scope of attached materials to avoid overwhelming
the model with too much context, which may dilute focus.
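If a source only exists as a PDF and you want to be sure the model receives clean text, one option is to extract the text first and attach the resulting text file. The sketch below assumes the open-source pypdf library, and the file name is made up:

    from pypdf import PdfReader  # pip install pypdf

    # "court_case.pdf" is a made-up file name for this example
    reader = PdfReader("court_case.pdf")
    text = "\n".join(page.extract_text() or "" for page in reader.pages)

    # Save as plain text so the model gets the words without layout noise
    with open("court_case.txt", "w", encoding="utf-8") as f:
        f.write(text)

Note that a scanned PDF is effectively a picture of text, so it would need OCR before this works.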
3. System Prompt
Definition: Often referred to as the model’s “personality,” the system prompt controls how the
model presents its knowledge.
Customizability: The system prompt can be adjusted to fit specific use cases or domains,
making outputs more consistent and aligned with desired tones or styles.
Flexibility: System prompts can range from a single sentence to detailed, multi-part instructions (see the sketch at the end of this section).
Benefit: A tailored system prompt increases the likelihood of stable and predictable
outputs.
Consistency Across Tasks: Maintaining a consistent system prompt for similar tasks can
lead to more reliable and uniform results across interactions.
Experimentation: Testing different variations of the system prompt can help identify
which configurations yield the best results for specific needs.
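To make the division of roles concrete, here is a minimal sketch of how a system prompt and a user prompt are sent to a model through OpenAI's chat API. This is a generic illustration of the concept, not our solution's internal setup, and the model name and wording are examples only:

    from openai import OpenAI  # pip install openai

    client = OpenAI()  # assumes an API key is set in the environment

    response = client.chat.completions.create(
        model="gpt-4o",  # example model name
        messages=[
            # System prompt: the "personality" that shapes every answer
            {"role": "system",
             "content": "You are an assistant for a Danish law firm. Answer formally, "
                        "base yourself on the attached sources, and say so when unsure."},
            # User prompt: the concrete task
            {"role": "user",
             "content": "Summarise the attached judgment in five bullet points."},
        ],
    )
    print(response.choices[0].message.content)

Keeping the system prompt fixed while only the user prompt varies is what gives the stable, uniform outputs described above.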
4. User Prompt
Function: The user prompt specifies the task given to the model. Its complexity and
creativity are limited only by the user’s imagination.
Prompting Techniques:
- One-shot Prompting: Providing a single example to guide the model (see the example below).
- Series Prompting: Using a sequence of prompts to guide the model through multi-step
tasks.
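As a quick illustration (the wording is invented for this handout), a one-shot prompt pairs the instruction with a single worked example:

    Rewrite contract clauses in plain language.
    Example: "The lessee shall indemnify the lessor against all losses arising from..."
    becomes: "The tenant covers the landlord's losses if..."
    Now rewrite: "Notwithstanding anything to the contrary herein, the parties agree that..."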
Optimization Tips:
- Use frameworks like COSTAR as a foundation for structuring prompts (see the example at the end of this section).
- Ask the model itself for feedback on how a prompt could be improved and why the output may not meet expectations.
Clear and Concise Language: Use straightforward language to help the model
understand complex tasks without ambiguity.
Layered Instructions: For complex tasks, break down instructions into clear steps or
phases to improve comprehension and output quality.
Iterative Feedback Loop: Regularly update the prompt based on model performance
and output feedback for continuous improvement.
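Finally, as an illustration (the case details are invented), a user prompt structured with the COSTAR framework (Context, Objective, Style, Tone, Audience, Response format) could look like this:

    Context: You are reviewing a draft shareholder agreement for a Danish startup client.
    Objective: Identify every clause that deviates from our attached standard template.
    Style: A structured legal memo.
    Tone: Neutral and precise.
    Audience: A senior partner who has not read the draft.
    Response: A numbered list, one deviation per item, with the clause reference.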
GOOD LUCK!