Perth - University of Western Australia
Friday 19 April 2024
Data privacy remains a primary concern when deploying Large Language Models (LLMs). Using open-source LLMs that users can download and run locally on their own devices can address this concern, since no data leaves the user's machine.
In contrast to fine-tuning, in-context learning, in which a handful of labelled examples are supplied directly in the prompt, eliminates the need for human effort in preparing training data. Based on these observations, we conducted an experiment using open-source LLMs with in-context learning for Named Entity Recognition (NER).
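As a rough illustration, the sketch below shows what a 2-shot in-context NER prompt to a locally run open-source LLM might look like, here using the Hugging Face transformers library. The model name, entity labels, and example sentences are illustrative assumptions, not the exact setup used in the experiment.

```python
# A minimal sketch of 2-shot in-context NER with a locally hosted
# open-source LLM. The model choice is an assumption for illustration.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # illustrative local model
)

# Two labelled demonstrations in the prompt, followed by the query sentence.
prompt = """Extract named entities (PER, ORG, LOC) from each sentence.

Sentence: Barack Obama visited Berlin.
Entities: Barack Obama (PER); Berlin (LOC)

Sentence: Apple opened a new office in Sydney.
Entities: Apple (ORG); Sydney (LOC)

Sentence: Marie Curie worked at the University of Paris.
Entities:"""

# Greedy decoding keeps the extracted entities deterministic.
output = generator(prompt, max_new_tokens=50, do_sample=False)
print(output[0]["generated_text"])
```

Because the demonstrations live entirely in the prompt, adapting this to a new entity scheme or domain only requires editing the examples, with no retraining step.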
Our findings indicate that 2-shot or 3-shot prompting yields the best performance. However, performance on domain-specific datasets remains relatively low, at around a 47% F1 score.
To simplify the use of open-source LLMs for users from various backgrounds, the NLP-TLP team (Theme 1) has developed a self-hosted platform that facilitates open-source LLM applications with minimal coding and ensures data privacy.