WE ARE LINGO
2nd floor, Garuda BHIVE, BMTC Complex,
Old Madiwala, Kuvempu Nagar,
Stage 2, BTM Layout, Bengaluru,
Karnataka – 560068. India.
Shakti

A Breakthrough in On-Device AI

SHAKTI MODEL - SERIES

01  Medical Devices
02  Smart Wearables
03  Consumer Electronics
04  Switches & Routers
05  Drones with Edge AI

Transformer Architectures, Small Language Models, and On-Device AI

The Transformer architecture revolutionized Natural Language Processing (NLP) by leveraging the self-attention mechanism, which allowed for parallel computation and greater scalability. This innovation paved the way for Large Language Models (LLMs) to achieve state-of-the-art performance across tasks such as text generation, translation, and question answering.
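The self-attention mechanism above can be sketched in a few lines of NumPy; all token-pair affinities are computed in one matrix product, which is what makes the computation parallel. Matrix names (Wq, Wk, Wv) and sizes here are illustrative, not taken from any particular model's implementation:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over one sequence.

    X: (seq_len, d_model) token embeddings; Wq/Wk/Wv project tokens into
    query/key/value spaces. Names and shapes are illustrative.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_head = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_head)              # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                              # each token mixes all values

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 8))                     # 4 tokens, d_model = 8
Wq, Wk, Wv = (rng.standard_normal((8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)          # (4, 8)
```

Because every row of the output is computed independently, the whole sequence is processed at once rather than token by token, which is the scalability gain over recurrent models.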

Discover the groundbreaking innovations behind our patented technologies.

Architecture of Shakti-LLM

Shakti-LLM is optimized for resource-constrained environments like smartphones, wearables, and IoT systems. With 2.5 billion parameters and a context length of 4096 tokens, it focuses on high-performance NLP for real-time applications.

Key innovations include sliding window attention and Key-Value (KV) caching, which enable efficient long-sequence processing in edge computing environments. Shakti is designed for scalability, efficiency, and adaptability, making it suitable for industries such as healthcare, finance, and customer service.
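As a rough illustration of how sliding window attention bounds memory, here is a toy key-value cache: each decoding step appends the new token's key/value pair and drops entries older than the window, so memory stays O(window) regardless of sequence length. The class name and window size are invented for this sketch; Shakti's actual internals are not public.

```python
from collections import deque

class SlidingKVCache:
    """Toy KV cache with a fixed attention window (illustrative only)."""

    def __init__(self, window: int):
        # deque with maxlen evicts the oldest entry automatically
        self.keys = deque(maxlen=window)
        self.values = deque(maxlen=window)

    def append(self, k, v):
        self.keys.append(k)
        self.values.append(v)

    def snapshot(self):
        return list(self.keys), list(self.values)

cache = SlidingKVCache(window=3)
for step in range(5):                 # simulate 5 decoding steps
    cache.append(f"k{step}", f"v{step}")
keys, _ = cache.snapshot()
print(keys)                           # ['k2', 'k3', 'k4']
```

Attention at each step then only runs over the cached window instead of the full history, which is what makes long sequences tractable on edge hardware.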

Model Configurations for On-Device and On-Prem Applications

Shakti-100M: Edge Computing Powerhouse

Ideal for:

Benchmarking performance:

Shakti-250M: Domain Intelligence Enabler

Specialized Training:

Perfect for:

Benchmarking performance:

Shakti-500M: Enterprise Workflow Optimizer

Technical Features:

Suited for:

Performance Metrics:

Shakti-1B: Multimodal Intelligence Hub (Coming Soon...)

Data Processing Capabilities:

Specialized Features:

Enterprise Use Cases:

Shakti-2.5B: Enterprise Scale Solution

Designed for:

Advanced Capabilities:

Technical Innovations:

Shakti-5B: Analytics Powerhouse (Coming Soon...)

Core Strengths:

Specialized for:

Shakti-8B: Advanced Research Platform (Coming Soon...)

Technical Specifications:

Engineered for:

Key Advantages


Fairness, Accountability, and Transparency

1. Data Filtering: Remove sensitive information; add synthetic data to ensure fairness.
2. Compliance: Adhere to ethical guidelines, especially in sensitive sectors.
3. Audits: Ongoing audits and assessments for bias mitigation.

Data Preparation & Quality Control

1. Filtering & Augmentation: Remove sensitive data; fill gaps with synthetic data.
2. Enterprise Data Mapping: Ensure Shakti outputs align with specific business needs.
3. Multi-Domain Data: Incorporate diverse data types for well-rounded understanding.

LLMops & Lifecycle Management

1. Human-in-the-Loop: Continuous quality assurance and adjustments.
2. Continuous Monitoring: Retrain and update the model using new data.
3. Version Control: Ensure traceable, consistent deployment.
4. Scalable Pipelines: Automate data, training, and deployment processes.

Training Methodologies

1. Supervised Fine-Tuning (SFT): Improve context-based responses in industries like healthcare and finance.
2. RLHF: Enhance user alignment and helpfulness in Shakti-500M.
3. DPO: Efficient user-aligned output tuning in Shakti-100M and Shakti-250M.
4. In-House Training Framework: Optimize and scale model training for enterprise use.
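As one concrete reference point for the DPO step, the published DPO objective for a single preference pair can be written out in plain Python. This is a sketch of the general technique with illustrative inputs, not SandLogic's training code:

```python
import math

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss for one preference pair.

    Inputs are sequence log-probabilities under the policy being tuned and
    under a frozen reference model; beta scales the implicit reward.
    """
    reward_chosen = beta * (logp_chosen - ref_logp_chosen)
    reward_rejected = beta * (logp_rejected - ref_logp_rejected)
    margin = reward_chosen - reward_rejected
    # -log sigmoid(margin): small when the policy prefers the chosen answer
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

print(round(dpo_loss(-10.0, -12.0, -11.0, -11.0), 4))   # 0.5981
```

Unlike RLHF, this needs no separate reward model or on-policy sampling, which is why DPO is often the lighter-weight choice for smaller models.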

Evaluation & Quality Metrics

1. DeepEval: Open-source tool to measure faithfulness and perplexity.
2. Common Metrics: BLEU, ROUGE, F1, Precision, Recall, BERTScore, MMLU.
3. Ground Truth Analysis: Verify outputs against reference data.
4. LLM Harness (open source): Evaluate reasoning and factual accuracy.
5. HaluMon: Detect and minimize hallucinations.
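To make one of these metrics concrete: perplexity is the exponential of the average per-token negative log-likelihood. The helper below is a minimal illustration of the definition, not the DeepEval API:

```python
import math

def perplexity(token_logprobs):
    """Perplexity from per-token natural-log probabilities: exp(mean NLL).

    Lower is better; a model assigning probability 1 to every token
    would score a perplexity of 1.
    """
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# A model that guesses uniformly among 4 options at every token
# is exactly as "perplexed" as a 4-way coin flip.
print(round(perplexity([math.log(0.25)] * 6), 6))   # 4.0
```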

Guardrails & Safety Mechanisms

1. HaluMon: Monitor hallucinations for factual reliability.
2. Prompt Tuning & Query Expansion: Maintain ethical, accurate responses.
3. Prompt Testing & Monitoring: Assess prompt changes for quality assurance.

Responsible Deployment & Ethical Use

1. In-House & Open-Source Tools: Monitor outputs and ensure reliability.
2. HaluMon & LLM Harness: Maintain ethical outputs with a focus on accuracy.
3. Compliance Standards: Responsible deployment, especially in sensitive sectors.

Shakti Benchmarks

Shakti-100M: Edge Computing Powerhouse

| Category | Benchmark | Shakti-100M | Boomer-634M [1] | SmolLM-135M [2] | SmolLM-360M [3] | AMD-Llama-135M [4] |
|---|---|---|---|---|---|---|
| Popular Aggregate Benchmark | MMLU (5-shot) | 25.56 | 25.23 | 30.2 | 34.4 | 23.02 |
| | BigBenchHard (0-shot) | 30.12 | 21.11 | 23 | 24.4 | 18.71 |
| | IFEval | 24.3 | 22.22 | 15.9 | 19.8 | 22 |
| Language Understanding | HellaSwag (5-shot) | 51.34 | 34.08 | 41.2 | 51.8 | 30.48 |
| | ANLI (7-shot) | 21.34 | 27.5 | 30.73 | n/a | n/a |
| Reasoning | PIQA (5-shot) | 69.2 | 62.57 | 68.4 | 71.6 | 64.20 |
| | OpenbookQA (10-shot) | 37.9 | 35.76 | 34 | 37.2 | 30.73 |
| | TruthfulQA (MC2) (10-shot) | 29.2 | 27.57 | 22.56 | n/a | n/a |
| | WinoGrande (5-shot) | 61.3 | 51.07 | 51.3 | 52.8 | 50.12 |
| | ARC Challenge | 45.8 | 62.57 | 42.4 | 50.1 | 43.64 |
| | SQuAD | 31.5 | 57.5 | 25 | n/a | n/a |
| | MedQA | 28.3 | 14 | 11.02 | 12.36 | 15.57 |
| | GPQA | 14.9 | 12.1 | 9.89 | 11 | 12.4 |
| | BoolQ | 29.4 | 22.9 | 17.3 | 21.3 | 23.54 |
| | SocialQA | 23.4 | 14.5 | 16.9 | 19 | 19.1 |
| | CommonsenseQA | 35.8 | 29 | 32.7 | 35.3 | 22.56 |
| Factual Knowledge | TriviaQA (5-shot) | 15.3 | 2.73 | 4.3 | 9.1 | 7.54 |
| Math | GSM8K (8-shot) | 9.2 | 0.91 | 1 | 1.6 | n/a |
| | MATH (0-shot) | 13.9 | 23.38 | 14 | 19 | 20.64 |
| Coding and Programming | HumanEval | 7.8 | 5.1 | n/a | n/a | n/a |

Scores marked n/a were not reported in the source.

Shakti-250M: Domain Intelligence Enabler

| Category | Benchmark | Shakti-250M | Boomer-1B [1] | Boomer-634M [2] | Qwen2.5-0.5B [3] | SmolLM-360M [4] | Llama 3.2 1B [5] |
|---|---|---|---|---|---|---|---|
| Popular Aggregate Benchmark | MMLU (5-shot) | 28.98 | 25.92 | 25.23 | 47.5 | 34.4 | 32.2 |
| | BigBenchHard (0-shot) | 13.75 | 28.65 | 21.11 | 20.3 | 24.4 | 30.93 |
| | IFEval | 12.83 | 23.81 | 22.22 | 27.9 | 19.8 | 59.5 |
| Language Understanding | HellaSwag (5-shot) | 29.96 | 31.66 | 34.08 | 52.1 | 51.8 | 41.2 |
| | ANLI (7-shot) | 33.40 | 32.57 | 27.5 | 26.85 | n/a | 22.56 |
| Reasoning | PIQA (5-shot) | 63.22 | 60.78 | 62.57 | 72.50 | 71.6 | 80.64 |
| | OpenbookQA (10-shot) | 16.60 | 22.56 | 35.75 | 30.73 | 37.2 | 37 |
| | TruthfulQA (MC2) (10-shot) | 20.69 | 25.69 | 27.57 | 40.2 | 30.7 | n/a |
| | WinoGrande (5-shot) | 52.97 | 45.79 | 51.07 | 56.3 | 52.8 | 60 |
| | ARC Challenge | 41.20 | 22.35 | 62.57 | 35.6 | 50.1 | 32.8 |
| | SQuAD | 23.25 | 67 | 57.5 | 52.94 | n/a | 49.2 |
| Factual Knowledge | TriviaQA (5-shot) | 1.68 | 25.25 | 2.73 | 12.5 | 9.1 | 25.69 |
| Math | GSM8K (8-shot) | 2.35 | 1.5 | 0.91 | 41.6 | n/a | 44.4 |
| | MATH (0-shot) | 21.71 | 23.38 | 19.5 | n/a | n/a | n/a |

Scores marked n/a were not reported in the source.

Shakti-500M: Enterprise Workflow Optimizer

| Category | Benchmark | Shakti-500M | Boomer-1B [1] | Boomer-634M [2] | Qwen2.5-0.5B [3] | OLMo 1B [4] | Llama 3.2 1B [5] |
|---|---|---|---|---|---|---|---|
| Popular Aggregate Benchmark | MMLU (5-shot) | 26.90 | 25.92 | 25.23 | 47.5 | n/a | 32.2 |
| | BigBenchHard (0-shot) | 29.1 | 28.65 | 21.11 | 20.3 | n/a | 30.93 |
| | IFEval | 26.62 | 23.81 | 22.22 | 27.9 | n/a | 59.5 |
| Language Understanding | HellaSwag (5-shot) | 45.53 | 31.66 | 34.08 | 52.1 | 62.5 | 41.2 |
| | ANLI (7-shot) | 36.70 | 32.57 | 27.5 | 26.85 | n/a | 22.54 |
| Reasoning | PIQA (5-shot) | 74.59 | 60.78 | 62.57 | 72.50 | 73.7 | 80.64 |
| | Med MCQA (2-shot) | 21.61 | 17.56 | 37.50 | 42.5 | n/a | 37.57 |
| | OpenbookQA (10-shot) | 39.80 | 22.56 | 35.76 | 30.73 | 46.4 | 37 |
| | WinoGrande (5-shot) | 59.67 | 45.79 | 51.07 | 56.3 | 58.9 | 60 |
| | SQuAD | 71.40 | 67 | 57.5 | 52.94 | n/a | 49.2 |
| Factual Knowledge | TriviaQA (5-shot) | 31.11 | 25.25 | 27.6 | 12.5 | n/a | 25.69 |
| Math | GSM8K (8-shot) | 20.92 | 1.5 | 0.91 | 41.6 | n/a | 44.4 |
| | MATH (0-shot) | 26.97 | 23.38 | 19.5 | n/a | n/a | n/a |

Scores marked n/a were not reported in the source.

Shakti-2.5B: Enterprise Scale Solution

| Category | Benchmark | Shakti-2.5B | Phi-3-Mini-4k [1] | Phi-3.5-Mini [2] | Sarvam-V1 [4] | Llama 3 8B [3] |
|---|---|---|---|---|---|---|
| Popular Aggregate Benchmark | MMLU (5-shot) | 71.7 | 68.8 | 69 | 36.2 | 66.5 |
| | BigBenchHard (0-shot) | 58.2 | 71.7 | 69 | 31.2 | 51.5 |
| | IFEval | 75.4 | 59.2 | 77.4 | n/a | n/a |
| Language Understanding | HellaSwag (5-shot) | 52.4 | 76.7 | 69.4 | 48 | 71.1 |
| | ANLI (7-shot) | 55.2 | 52.8 | 49.2 | 32 | 57.3 |
| Reasoning | PIQA (5-shot) | 86.2 | 84.2 | 81 | 78 | 75.7 |
| | OpenbookQA (10-shot) | 36.8 | 83.2 | 79.2 | 31.4 | 82.6 |
| | TruthfulQA (MC2) (10-shot) | 68.4 | 65 | 64 | 48 | 63.1 |
| | WinoGrande (5-shot) | 76.7 | 70.8 | 68.5 | 67 | 65 |
| | ARC Challenge | 69.3 | 78.6 | 87.4 | n/a | n/a |
| | SQuAD | 63.5 | 67.7 | n/a | n/a | n/a |
| | MedQA | 60.3 | 53.8 | 50.6 | 38.1 | 60.5 |
| | GPQA | 31.2 | 32.8 | 31.9 | n/a | n/a |
| | BoolQ | 61.10 | 77.6 | 78 | 59.7 | 80.9 |
| | SocialQA | 79.2 | 76.6 | 74.7 | 62 | 73.9 |
| Factual Knowledge | TriviaQA (5-shot) | 58.2 | 64 | 61.5 | 48.6 | 67.7 |
| Math | GSM8K (8-shot) | 72.2 | 77.7 | 86.2 | n/a | n/a |
| | MATH (0-shot) | 46.7 | 48 | 44.2 | n/a | n/a |
| Coding and Programming | HumanEval | 35.8 | 59.1 | 68.2 | 37.1 | 60.4 |

Scores marked n/a were not reported in the source.