Kaggle Master Launches Tamil Domain LLM: Tamil-Llama
Kaggle master Abhinand Balachandran has launched "Tamil-Llama," an Indian large language model (LLM) designed specifically for the Tamil language. The model is built on top of Meta's Llama 2.
Tamil-Llama extends the base model with additional Tamil language tokens and uses the LoRA (Low-Rank Adaptation) method for parameter-efficient training.
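For readers unfamiliar with the technique, here is a minimal sketch of how LoRA adapters are typically attached to a Llama 2 model using Hugging Face's peft library. The hyperparameters and target modules below are illustrative assumptions, not the values used to train Tamil-Llama:

```python
# Illustrative sketch: attaching LoRA adapters to a Llama 2 base model with
# Hugging Face's peft library. Hyperparameters here are assumptions for
# demonstration, not the exact values used for Tamil-Llama.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

lora_config = LoraConfig(
    r=64,               # rank of the low-rank update matrices (assumed)
    lora_alpha=128,     # scaling factor for the adapter update (assumed)
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # attention projections
    task_type="CAUSAL_LM",
)

model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the small adapter weights are trained
```

Because only the low-rank adapter weights are updated, this approach keeps GPU memory and training costs far below those of full fine-tuning.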
Kaggle master Sudalai Rajkumar (SRK) posted information about this model on LinkedIn and congratulated Balachandran on his achievement.
The model is available in 7-billion- and 13-billion-parameter versions, marking a significant advancement for Tamil-language AI and arguably the most advanced open-source LLM tailored for Indian languages to date.
In total, four variants are offered: Tamil-Llama 7B, 13B, 7B Instruct, and 13B Instruct, catering to different requirements and compute budgets.
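If the released checkpoints follow the naming on the author's Hugging Face page, loading one looks like the sketch below; treat the exact repository ID as an assumption and verify it on the model card before use:

```python
# Loading one of the released checkpoints with transformers. The repo ID
# below is an assumption based on the author's Hugging Face naming; check
# the actual model card for the correct ID.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "abhinand/tamil-llama-7b-instruct-v0.1"  # assumed repo ID
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

prompt = "வணக்கம், நீங்கள் யார்?"  # "Hello, who are you?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```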
The research paper explains that during training, the model's vocabulary was expanded with 16,000 Tamil language tokens on top of Llama 2's original 32,000, for a total of 48,000 tokens, enhancing language inclusivity.
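To illustrate the mechanics of such an expansion, here is a simplified sketch using Hugging Face transformers. The sample tokens are placeholders, and the real pipeline (which typically trains a dedicated tokenizer on Tamil text and merges its vocabulary) is more involved:

```python
# Sketch of the general vocabulary-expansion step: new Tamil tokens are
# added to the base tokenizer, and the model's embedding matrix is resized
# to match. The token list is a placeholder assumption; the actual model
# adds 16,000 Tamil tokens derived from a tokenizer trained on Tamil text.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

new_tamil_tokens = ["தமிழ்", "மொழி", "வணக்கம்"]  # placeholder examples
num_added = tokenizer.add_tokens(new_tamil_tokens)
model.resize_token_embeddings(len(tokenizer))  # grow embeddings for new tokens

print(f"Added {num_added} tokens; vocab size is now {len(tokenizer)}")
# For Tamil-Llama, the vocabulary grows from 32,000 to 48,000 entries.
```

The newly added embedding rows start untrained, which is why continued pretraining on Tamil text is needed before the expanded vocabulary becomes useful.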
The dataset used during the fine-tuning phase is openly available in the project repository, promoting transparency and collaboration within the AI community.
Balachandran completed the project in about two months, describing how he balanced GPU expenses against the complex technical challenges of building a state-of-the-art language model.
Balachandran sees Tamil-Llama as more than a technological breakthrough: a step toward driving Indian languages to the forefront of AI.