Software engineer, ML specialist, and tech YouTuber.

I’m Fotie, widely known as fotiecodes. Combining a strong foundation in software engineering with a focus on machine learning, I'm passionate about open-source projects and driving innovation in technology.

LoRA and QLoRA: Simple Fine-Tuning Techniques Explained

Fine-tuning large language models (LLMs) can be resource-intensive, requiring immense computational power. LoRA (Low-Rank Adaptation) and QLoRA (Quantized Low-Rank Adaptation) offer efficient alternatives for training these models while using fewer resources.
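To make this concrete, here is a minimal sketch of how the two techniques are typically combined with the Hugging Face transformers, bitsandbytes, and peft libraries. The base model name and the LoRA hyperparameters (rank, alpha, target modules) are illustrative placeholders, not recommendations.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# QLoRA: load the frozen base model in 4-bit NF4 precision to cut memory use.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",  # placeholder base model, swap in your own
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA: train small low-rank adapter matrices instead of updating all weights.
lora_config = LoraConfig(
    r=16,                                  # rank of the update matrices
    lora_alpha=32,                         # scaling factor for the adapters
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections; model-dependent
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the full model
```

The adapted model can then be passed to a standard training loop; only the adapter weights are updated, which is what keeps the memory and compute footprint small.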

Customizing LLMs through Supervised Fine-tuning

In the ever-evolving domain of Natural Language Processing (NLP), supervised fine-tuning has emerged as a game-changing technique for adapting pre-trained Large Language Models (LLMs) to specific tasks. While pre-trained LLMs like those in the GPT family have made significant strides in language comprehension and generation, they often lack optimization for particular applications.
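As a quick illustration, a basic supervised fine-tuning run can be set up in a few lines with the TRL library. The model and dataset names below are placeholders, and the exact arguments vary between TRL versions, so treat this as a sketch rather than a drop-in script.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Load a small instruction-following dataset (placeholder choice).
dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")

# SFTTrainer wraps the usual Trainer loop for supervised fine-tuning on text data.
trainer = SFTTrainer(
    model="facebook/opt-350m",            # small base model, purely illustrative
    train_dataset=dataset,
    args=SFTConfig(output_dir="sft-output"),
)
trainer.train()
```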

Stay up to date

Get notified when I publish something new, and unsubscribe at any time.

Work

  1. Ranx.ai - Machine Learning Specialist
  2. Moonblocks LLC - Software Engineer
  3. Gohze - Software Developer
  4. N-Tec Labs - Machine Learning Specialist
  5. scolaryx - Software Engineer Intern
Download CV