
The rise of large language models (LLMs) has revolutionised AI by offering unprecedented capabilities in natural language processing, text generation, and more. However, the degree of openness in these models (open source, open weights, or restricted weights) significantly affects their accessibility, usage, and development.

This post delves into these three distinct concepts, clarifying their nuances and implications for various stakeholders.

Open-source language models

Open-source LLMs are characterised by the public availability of their source code under permissive licences (like Apache 2.0 or MIT). This openness grants developers and researchers unrestricted access to the model’s inner workings, including its:

  1. architecture;
  2. training code and algorithms; and
  3. hyperparameters (the settings that govern how the model is trained and how it performs).
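As a rough illustration, an open-source release exposes each of these pieces directly, so a developer can inspect and change them. The sketch below is purely hypothetical (no real model's names or values) but shows the kind of artefacts involved:

```python
# Hypothetical sketch of what an open-source model release exposes.
# All names and values are illustrative, not taken from any real model.

# 1. Architecture: the model's structural blueprint.
architecture = {
    "model_type": "decoder-only transformer",
    "num_layers": 24,
    "hidden_size": 2048,
    "num_attention_heads": 16,
}

# 3. Hyperparameters: settings that adjust training and performance.
hyperparameters = {
    "learning_rate": 3e-4,
    "batch_size": 512,
    "warmup_steps": 2000,
}

# 2. Training algorithm: a simplified stand-in (plain gradient descent).
def training_step(weights, gradients, lr):
    """Apply one gradient-descent update to a list of weights."""
    return [w - lr * g for w, g in zip(weights, gradients)]

print(training_step([1.0, 2.0], [0.5, -0.5], hyperparameters["learning_rate"]))
```

Because all three artefacts are public, anyone can audit the architecture, rerun the training procedure, or adjust the hyperparameters for their own needs.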

Advantages

  • Transparency and trust: Open-source models promote transparency by:
    • allowing external scrutiny of the code;
    • fostering trust in the model’s development process; and
    • mitigating concerns about hidden biases or vulnerabilities.
  • Collaboration and innovation: The open-source approach encourages collaboration among researchers and developers worldwide, accelerating innovation and collective improvement of the model.
  • Customisability: Developers can freely modify and adapt open-source models to suit specific use cases, tailoring them to diverse applications and domains.
  • Community-driven development: Open-source communities actively contribute to the model’s development, fixing bugs, enhancing features, and ensuring ongoing support and maintenance.

Examples

Well-known examples include EleutherAI's GPT-NeoX and the Allen Institute for AI's OLMo, both of which publish their training code under the Apache 2.0 licence.

Legal and commercial considerations

Open-source LLMs are typically released under licences that allow broad usage and modification, but these licences often impose conditions. Depending on the licence, users may be required to:

  1. attribute the original creators;
  2. share any modifications under the same terms (a feature of copyleft licences such as the GPL, rather than of permissive licences like Apache 2.0 or MIT); or
  3. adhere to certain ethical or responsible-use guidelines (as in AI-specific licences such as the OpenRAIL family).

The permissive nature of these licences fosters widespread adoption and innovation. However, it can also pose challenges for monetisation, as developers may struggle to generate revenue directly from the model itself.

Open-weight language models

Open-weight LLMs, while not necessarily open source, make their pre-trained model weights publicly accessible. These weights represent the learned parameters that define the model’s behaviour and performance. While the training code and data might not be available, open weights enable researchers and developers to experiment with the model, fine-tune it for specific tasks, and gain insights into its internal representations.

Advantages

  • Accessibility and research: Open weights facilitate research by providing a pre-trained starting point, saving the substantial computational resources required for training large-scale models from scratch.
  • Reproducibility: Open weights enable researchers to reproduce and verify the results of published studies, ensuring transparency and rigour in the scientific community.
  • Customisation and adaptation: Developers can fine-tune open-weight models on domain-specific data to create specialised models without the need for extensive training from scratch.
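To make the fine-tuning idea concrete, here is a deliberately tiny sketch: a two-parameter "model" whose "pre-trained" weights are adapted to a small domain dataset by gradient descent. Real fine-tuning uses frameworks such as PyTorch and billions of parameters, but the workflow (load published weights, then continue training on new data) is the same; all numbers here are illustrative:

```python
# Toy illustration of fine-tuning: start from "pre-trained" weights
# rather than random ones, then adapt them on a small domain dataset.
# Purely illustrative; real fine-tuning uses frameworks like PyTorch.

# Pretend these weights were downloaded from an open-weight release.
pretrained_w, pretrained_b = 2.0, 0.0   # model: y = w * x + b

# Small domain-specific dataset where the true relation is y = 3x + 1.
data = [(1.0, 4.0), (2.0, 7.0), (3.0, 10.0)]

def fine_tune(w, b, samples, lr=0.01, epochs=500):
    """Gradient descent on squared error, starting from pre-trained weights."""
    for _ in range(epochs):
        for x, y in samples:
            err = (w * x + b) - y   # prediction error on this sample
            w -= lr * err * x       # nudge weights to reduce the error
            b -= lr * err
    return w, b

w, b = fine_tune(pretrained_w, pretrained_b, data)
print(round(w, 2), round(b, 2))   # converges towards w = 3, b = 1
```

The point of the sketch is the starting position: because the weights are already trained, adaptation needs far less data and compute than training from scratch.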

Examples

Meta's Llama family is a prominent example: the weights can be downloaded, but under a custom community licence rather than an open-source one, and the training data is not released.

Legal and commercial considerations

Open-weight LLMs often fall into a legal grey area, as the model weights themselves may not be copyrightable, but the underlying training data and code might be. This raises questions about ownership, usage rights, and potential liabilities for unintended consequences. While open weights can be a valuable resource for research and development, their commercial use might be restricted by the terms of the original model’s licence or by separate agreements with the model provider.

Restricted-weight language models

Restricted-weight LLMs, often proprietary models, keep their weights confidential and accessible only through APIs or limited licences. This approach is commonly adopted by companies to protect their intellectual property, control access to their advanced AI technology, and potentially monetise their models.
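In practice, API-only access looks roughly like the sketch below: the client sends a prompt together with an access key and receives generated text back, while the weights never leave the provider's servers. The endpoint, header names, and payload fields here are hypothetical, not any real provider's API:

```python
import json
import urllib.request

# Hypothetical sketch of API-only access to a restricted-weight model.
# The endpoint, model name, and payload fields are illustrative only.

API_KEY = "sk-example-not-a-real-key"

payload = json.dumps({
    "model": "proprietary-llm-v1",              # hypothetical model name
    "prompt": "Summarise this contract clause.",
    "max_tokens": 128,
}).encode("utf-8")

request = urllib.request.Request(
    url="https://api.example.com/v1/completions",  # hypothetical endpoint
    data=payload,
    headers={
        "Authorization": f"Bearer {API_KEY}",  # access gated by a licensed key
        "Content-Type": "application/json",
    },
    method="POST",
)

# The request is only constructed here, not sent: the user sees inputs
# and outputs, but the weights themselves remain confidential.
print(request.get_method(), request.full_url)
```

This is what makes the model commercially controllable: the provider can meter usage, revoke keys, and enforce its terms of service at the API boundary.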

Advantages

  • Commercial viability: Restricted weights allow companies to generate revenue through licensing fees or API usage, incentivising further research and development in the field.
  • Quality control: Companies can maintain stricter quality control over the model’s usage, ensuring it aligns with their ethical guidelines and mitigating potential misuse.

Examples

OpenAI's GPT-4, Anthropic's Claude, and Google's Gemini are well-known examples: their weights are confidential, and access is available only through hosted APIs and consumer products.

Legal and commercial considerations

Restricted-weight LLMs are typically governed by strict licensing agreements that limit their use, modification, and distribution. These agreements often include clauses to prevent misuse, protect the model provider’s intellectual property, and ensure compliance with applicable laws and regulations. While this approach offers greater control and potential for monetisation, it can also limit accessibility and hinder research that is not aligned with the model provider’s interests.

How ITLawCo can help

Navigating the complex legal and commercial landscape of language models can be challenging.

At ITLawCo, our team of legal professionals specialises in intellectual property, technology, and data protection law. We can help you understand the legal implications of different openness models, draft and negotiate licensing agreements, and ensure compliance with relevant regulations. Whether you are a developer, researcher, or business owner, ITLawCo can provide the guidance you need to make informed decisions about language models and protect your interests in this rapidly evolving field.

Contact ITLawCo today to learn more about how we can help you navigate the legal complexities of language models.