Software engineer, ML specialist, and Tech Youtuber.
I’m Fotie, widely known as fotiecodes, combining a strong foundation in software engineering and a focus on machine learning, I'm passionate about open-source projects and driving innovation in technology.
Fine-tuning large language models (LLMs) can be resource-intensive, requiring immense computational power. LoRA (Low-Rank Adaptation) and QLoRA (Quantized Low-Rank Adaptation) offer efficient alternatives for training these models while using fewer resources.
I recently embarked on an exciting research journey to explore the vulnerabilities of large language models (LLMs) like ChatGPT, Anthropic Gemini, and similar models. My goal was to see how hackers could exploit them through prompt injection attacks.
In the ever-evolving domain of Natural Language Processing (NLP), supervised fine-tuning has emerged as a game-changing technique for adapting pre-trained Large Language Models (LLMs) to specific tasks. While pre-trained LLMs like those in the GPT family have made significant strides in language comprehension and generation, they often lack optimization for particular applications.