Open-source artificial intelligence

Open-source artificial intelligence, as defined by the Open Source Initiative, is an AI system that is freely available to use, study, modify, and share.[1][2] This openness extends to the datasets used to train the model, its code, and its model parameters, promoting a collaborative and transparent approach to AI development that allows others to re-create a substantially similar system.[3][4]

Given the wide range of openness among AI projects, there has been significant debate over what should count as "open-source". Some large language models touted as open-source release only model weights, but not training data or code,[5][6] and have been criticized as "openwashing"[7] systems that are mostly closed.[8]

Popular open-source artificial intelligence project categories include large language models, machine translation tools, and chatbots.[9] Debate over the benefits and risks of open-source AI involves a range of factors such as security, privacy, and technological advancement.[10][11][8][12]

History

The history of open-source artificial intelligence is intertwined with both the development of AI technologies and the growth of the open-source software movement.[13]

1990s: Early development of AI and open-source software

The concept of AI dates back to the mid-20th century, when computer scientists such as Alan Turing and John McCarthy laid the groundwork for modern AI theories and algorithms.[14] Early AI research focused on developing symbolic reasoning systems and rule-based expert systems.[15][better source needed] An early AI program, the natural language processing "doctor" ELIZA, was re-implemented and shared by Jeff Shrager in 1977 as a BASIC program, and was soon translated into many other languages.

During this period, the idea of open-source software was beginning to take shape, with pioneers like Richard Stallman advocating for free software as a means to promote collaboration and innovation in programming.[16][non-primary source needed] The Free Software Foundation, founded in 1985 by Stallman, was one of the first major organizations to promote the idea of software that could be freely used, modified, and distributed. The ideas from this movement eventually influenced the development of open-source AI, as more developers began to see the potential benefits of open collaboration in software creation, including AI models and algorithms.[17][better source needed][18][better source needed]

In the 1990s, open-source software began to gain more traction,[19][better source needed] while the rise of machine learning and statistical methods led to the development of more practical AI tools. In 1993, the CMU Artificial Intelligence Repository was initiated to openly share a variety of AI software.[20][better source needed]

2000s: Emergence of open-source AI

In the early 2000s, open-source AI began to take off with the release of more user-friendly foundational libraries and frameworks that anyone could use and contribute to.[21][better source needed]

OpenCV was released in 2000[22] with a variety of traditional AI algorithms such as decision trees, k-nearest neighbors (kNN), naive Bayes, and support vector machines (SVM).[23]
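
A minimal sketch of how one of these classical algorithms can be used through OpenCV's Python bindings; the data points and class labels below are made up purely for illustration:

```python
import numpy as np
import cv2 as cv

# Tiny synthetic two-class dataset (illustrative values only).
samples = np.array([[1.0, 1.0], [1.2, 0.8], [8.0, 9.0], [9.1, 8.5]], dtype=np.float32)
labels = np.array([[0], [0], [1], [1]], dtype=np.float32)

# k-nearest neighbors, one of the classical algorithms in OpenCV's ml module.
knn = cv.ml.KNearest_create()
knn.train(samples, cv.ml.ROW_SAMPLE, labels)

# Classify a new point by a vote of its 3 nearest neighbors.
query = np.array([[8.5, 8.8]], dtype=np.float32)
ret, results, neighbours, dist = knn.findNearest(query, k=3)
print(results)  # the predicted class label for the query point
```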

2010s: Rise of open-source AI frameworks

The deep learning framework Torch was released in 2002 and made open-source with Torch7 in 2011; it was later followed by other open-source frameworks such as PyTorch and TensorFlow.[24]

AlexNet, an influential deep convolutional neural network, was released in 2012.[25]

OpenAI was founded in 2015 with a stated mission to create open-source artificial intelligence that benefits humanity, a framing that also helped the organization recruit in its early phases.[26] It released GPT-1 in 2018.

2020s: Open-weight and open-source generative AI

When it announced GPT-2 in 2019, OpenAI originally planned to keep the model's source code private, citing concerns about malicious applications.[27] After facing public backlash, however, OpenAI released the source code for GPT-2 to GitHub three months after its release.[27] OpenAI did not publicly release the source code or pretrained weights for the GPT-3 model.[28] At the time of GPT-3's release, GPT-2 was still the most powerful open-source language model in the world. 2022 saw the rise of larger and more powerful models under licenses of varying openness, including Meta's OPT.[29]

The Open Source Initiative consulted experts over two years to create a definition of "open-source" that would fit the needs of AI software and models. The most controversial aspect relates to data access, since some models are trained on sensitive data that cannot be released. In 2024, the organization published the Open Source AI Definition 1.0 (OSAID 1.0).[1][2][3] It requires full release of the software used to process the data, train the model, and make inferences from the model. For the data, it requires only access to sufficiently detailed information about the data used to train the AI so that others can understand and re-create it.[2]

In 2023, Meta's Llama 1 and Llama 2 and Mistral AI's Mistral and Mixtral open-weight models were first released,[30][31] along with MosaicML's smaller open-source models.[32][33] The release of the Llama models was a milestone in generating interest in open-weight and open-source models.[34]

In 2024, Meta released a collection of large AI models, including Llama 3.1 405B, which was competitive with less open models.[35] Meta's description of Llama as open-source has been controversial for reasons ranging from license terms that prohibit its use for certain purposes to the lack of disclosure of the data used to train the models.[36][37][38]

DeepSeek released its V3 LLM in December 2024 and its R1 reasoning model on 20 January 2025, both as open-weight models under the MIT license.[39][40] These releases drew wide attention to China's embrace of using and building more open AI systems as a way to reduce reliance on Western software and gatekeeping, and to give its industries faster access to higher-powered AI.[41] Projects based in China have since become more widely used around the world and have closed at least some of the gap with leading proprietary American models.[41][42][43]

Since the release of OpenAI's proprietary ChatGPT model in late 2022, only a few fully open (weights, data, code, etc.) large language models have been released. In September 2025, a Swiss consortium added to this short list by releasing a fully open model named Apertus.[44][45]

In December 2025, the Linux Foundation created the Agentic AI Foundation, which assumed control of some open-source agentic AI protocols and other technologies created by OpenAI, Anthropic and Block.[46][47]

Significance

The label "open-source" can provide real benefits to companies looking to hire top talent or attract customers.[4] The debate around "openwashing" (calling a project open-source when it is mostly closed) has significant implications for the success of various projects within the industry.[7]

Open-source artificial intelligence tends to get more support and adoption in countries and companies that do not have their own leading AI model.[4] These open-source projects can help to undercut the position of business and geopolitical rivals with the strongest proprietary models.[4] Europe has pursued openness as a digital sovereignty strategy to reduce the leverage that countries like the United States can use in negotiations on topics such as trade.[48][49]

Licenses

As of 2025, the license used by the largest number of models was Apache 2.0, though the licenses of a number of larger models such as Llama 3 place limits on commercial use and other activities.[34] Partially open-source code that carries such legal restrictions has deterred some companies from using those models for fear of a future lawsuit[4] or of a change in the terms and conditions.[50] Similar concerns apply to the large number of smaller models that do not specify a license.[34]

Applications

Healthcare

In the healthcare industry, open-source AI has been used in diagnostics, patient care, and personalized treatment options.[51] Open-source libraries have been used for medical imaging for tasks such as tumor detection, improving the speed and accuracy of diagnostic processes.[52][51] Additionally, OpenChem, an open-source library specifically geared toward chemistry and biology applications, enables the development of predictive models for drug discovery, helping researchers identify potential compounds for treatment.[53]

Military

Meta's Llama models, which have been described as open-source by Meta, were adopted by U.S. defense contractors like Lockheed Martin and Oracle after unauthorized adaptations by Chinese researchers affiliated with the People's Liberation Army (PLA) came to light.[54][55] The Open Source Initiative and others have contested Meta's use of the term open-source to describe Llama, due to Llama's license containing an acceptable use policy that prohibits use cases including non-U.S. military use.[38] Chinese researchers used an earlier version of Llama to develop tools like ChatBIT, optimized for military intelligence and decision-making, prompting Meta to expand its partnerships with U.S. contractors to ensure the technology could be used strategically for national security.[55] These applications now include logistics, maintenance, and cybersecurity enhancements.[55]

Benefits

Privacy and independence

A Nature editorial suggests medical care could become dependent on AI models that could be taken down at any time, are difficult to evaluate, and may threaten patient privacy.[11][56] Its authors propose that health-care institutions, academic researchers, clinicians, patients, and technology companies worldwide should collaborate to build open-source models for health care whose underlying code and base models are easily accessible and can be fine-tuned freely with institutions' own data sets.[11]

Free speech

Open-source models are harder to censor than closed-source ones.[56]

Collaboration and faster advancements

Large-scale collaborations, such as those seen in the development of open-source frameworks like TensorFlow and PyTorch, have accelerated advancements in machine learning (ML) and deep learning.[57] The open-source nature of these platforms also facilitates rapid iteration and improvement, as contributors from across the globe can propose modifications and enhancements to existing tools.[57]

Democratizing access

Open-source AI gives countries and organizations that otherwise lack access to proprietary models a cheaper way to use and invest in AI.[4][58][59] This can help create an ecosystem in which other businesses sell services built on top of these models.[50]

Transparency

A video about the importance of transparency of AI in medicine

One benefit of open-source AI is the increased transparency it offers compared to closed-source alternatives.[34] The open-sourced aspects of models allow their algorithms and code to be inspected, which promotes accountability and helps developers understand how a model reaches its conclusions.[34] Additionally, open-weight models, such as Llama and Stable Diffusion, allow developers to directly access model parameters, potentially facilitating reduced bias and increased fairness in their applications.[34] This transparency can help create systems with human-readable outputs, or "explainable AI", an increasingly important concern in high-stakes applications such as healthcare, criminal justice, and finance, where the consequences of decisions made by AI systems can be significant.[60][better source needed]
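
As a hedged illustration of what direct access to model parameters can look like in practice, the sketch below uses the open-source Hugging Face Transformers library to load an open-weight checkpoint and enumerate its parameters; the model identifier shown is a placeholder, not a real release:

```python
# Assumes the Hugging Face "transformers" library is installed;
# "example-org/open-weight-model" is a hypothetical placeholder identifier.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("example-org/open-weight-model")

# With open weights, every parameter tensor is available locally for
# inspection, auditing, or fine-tuning, unlike an API-only closed model.
for name, param in model.named_parameters():
    print(name, tuple(param.shape))
```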

Concerns

Quality and security

Open-source models offer fewer ways to prevent their use for malicious activities.[56] Open-source AI may allow bioterrorism groups to remove fine-tuning and other safeguards from AI models.[10][4][56] One proposed step toward reducing these kinds of harms is to require models to have their risks evaluated and to pass a certain standard before being released.[56] A July 2024 White House report found insufficient evidence to justify restricting the release of model weights,[61] though a number of experts in 2024 seemed more concerned about future advances than present-day capabilities.[56]

Executives who preferred proprietary models in 2025 cited security concerns and performance as major factors.[62]

Training costs

Training datasets for fully open-source models can be prohibitively expensive for many projects.[4][50]

References

  1. ^ a b Williams, Rhiannon; O'Donnell, James (22 August 2024). "We finally have a definition for open-source AI". MIT Technology Review. Retrieved 28 November 2024.
  2. ^ a b c Robison, Kylie (28 October 2024). "Open-source AI must reveal its training data, per new OSI definition". The Verge. Retrieved 28 November 2024.
  3. ^ a b "The Open Source AI Definition – 1.0". Open Source Initiative. Archived from the original on 31 March 2025. Retrieved 14 November 2024.
  4. ^ a b c d e f g h "A battle is raging over the definition of open-source AI". The Economist. 6 November 2024. ISSN 0013-0613. Retrieved 9 December 2025.
  5. ^ "Open Weights: not quite what you've been told". Open Source Initiative. Retrieved 23 September 2025.
  6. ^ Capoot, Ashley; Sigalos, MacKenzie (5 August 2025). "OpenAI releases lower-cost models to rival Meta, Mistral and DeepSeek". CNBC. Retrieved 23 September 2025.
  7. ^ a b Liesenfeld, Andreas; Dingemanse, Mark (5 June 2024). "Rethinking open source generative AI: Open washing and the EU AI Act". The 2024 ACM Conference on Fairness, Accountability, and Transparency. Association for Computing Machinery. pp. 1774–1787. doi:10.1145/3630106.3659005. ISBN 979-8-4007-0450-5.
  8. ^ a b Widder, David Gray; Whittaker, Meredith; West, Sarah Myers (November 2024). "Why 'open' AI systems are actually closed, and why this matters". Nature. 635 (8040): 827–833. Bibcode:2024Natur.635..827W. doi:10.1038/s41586-024-08141-1. ISSN 1476-4687. PMID 39604616.
  9. ^ Castelvecchi, Davide (29 June 2023). "Open-source AI chatbots are booming — what does this mean for researchers?". Nature. 618 (7967): 891–892. Bibcode:2023Natur.618..891C. doi:10.1038/d41586-023-01970-6. PMID 37340135.
  10. ^ a b Sandbrink, Jonas (7 August 2023). "ChatGPT could make bioterrorism horrifyingly easy". Vox. Retrieved 14 November 2024.
  11. ^ a b c Toma, Augustin; Senkaiahliyan, Senthujan; Lawler, Patrick R.; Rubin, Barry; Wang, Bo (December 2023). "Generative AI could revolutionize health care — but not if control is ceded to big tech". Nature. 624 (7990): 36–38. Bibcode:2023Natur.624...36T. doi:10.1038/d41586-023-03803-y. PMID 38036861.
  12. ^ Davies, Pascale (20 February 2024). "What is open source AI and why is profit so important to the debate?". Euronews. Retrieved 28 November 2024.
  13. ^ Morrone, Megan (15 February 2024). "With the rise of AI, the software business redefines "open"". Axios. Retrieved 16 December 2025.
  14. ^ "Appendix I: A Short History of AI | One Hundred Year Study on Artificial Intelligence (AI100)". ai100.stanford.edu. Retrieved 24 November 2024.
  15. ^ Kautz, Henry (31 March 2022). "The Third AI Summer: AAAI Robert S. Engelmore Memorial Lecture". AI Magazine. 43 (1): 105–125. doi:10.1002/aaai.12036. ISSN 2371-9621.
  16. ^ Stallman, Richard. "Why Software Should Be Free - GNU Project - Free Software Foundation". www.gnu.org. Archived from the original on 1 December 2024. Retrieved 24 November 2024.
  17. ^ "The Power of Collaboration: How Open-Source Projects are Advancing AI". kdnuggets.com.
  18. ^ Daigle, Kyle (8 November 2023). "Octoverse: The state of open source and rise of AI in 2023". The GitHub Blog. Retrieved 24 November 2024.
  19. ^ Code, Linux (3 November 2024). "A Brief History of Open Source". TheLinuxCode. Retrieved 24 November 2024.[permanent dead link]
  20. ^ "Topic: (/)". www.cs.cmu.edu. Retrieved 11 September 2025.
  21. ^ Priya (28 March 2024). "The Evolution of Open Source AI Libraries: From Basement Brawls to AI All-Stars". TheGen.AI. Retrieved 24 November 2024.
  22. ^ Pulli, Kari; Baksheev, Anatoly; Kornyakov, Kirill; Eruhimov, Victor (1 April 2012). "Realtime Computer Vision with OpenCV". ACM Queue. 10 (4): 40:40–40:56. doi:10.1145/2181796.2206309.
  23. ^ Adrian Kaehler; Gary Bradski (14 December 2016). Learning OpenCV 3: Computer Vision in C++ with the OpenCV Library. O'Reilly Media. pp. 26ff. ISBN 978-1-4919-3800-3.
  24. ^ Costa, Carlos J.; Aparicio, Manuela; Aparicio, Sofia; Aparicio, Joao Tiago (January 2024). "The Democratization of Artificial Intelligence: Theoretical Framework". Applied Sciences. 14 (18): 8236. doi:10.3390/app14188236. hdl:10362/173131. ISSN 2076-3417.
  25. ^ Lee, Timothy B. (11 November 2024). "How a stubborn computer scientist accidentally launched the deep learning boom". Ars Technica. Retrieved 11 September 2025.
  26. ^ Metz, Rachel (15 March 2024). "OpenAI and the Fierce AI Industry Debate Over Open Source". Bloomberg.com. Retrieved 16 December 2025.
  27. ^ a b Xiang, Chloe (28 February 2023). "OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit". VICE. Retrieved 14 November 2024.
  28. ^ Hao, Karen (23 September 2020). "OpenAI is giving Microsoft exclusive access to its GPT-3 language model". MIT Technology Review. Archived from the original on 5 February 2021. Retrieved 8 December 2024.
  29. ^ Heaven, Will (3 May 2022). "Meta has built a massive new language AI—and it's giving it away for free". MIT Technology Review. Retrieved 26 December 2023.
  30. ^ Nicol-Schwarz, Kai (2 December 2025). "French AI lab Mistral releases new AI models as it looks to keep pace with OpenAI and Google". CNBC. Retrieved 5 December 2025.
  31. ^ Heikkilä, Melissa (2 December 2025). "Mistral unveils new models in race to gain edge in 'open' AI". Financial Times. Retrieved 5 December 2025.
  32. ^ Quach, Katyanna (22 June 2023). "Bigger not always better in AI, boutique models are coming". The Register. Archived from the original on 14 December 2025. Retrieved 28 January 2026.
  33. ^ Quach, Katyanna (27 June 2023). "Databricks snaps up MosaicML to build private AI models". The Register. Archived from the original on 10 November 2025. Retrieved 28 January 2026.
  34. ^ a b c d e f Vake, Domen; Šinik, Bogdan; Vičič, Jernej; Tošić, Aleksandar (5 March 2025). "Is Open Source the Future of AI? A Data-Driven Approach". Applied Sciences. 15 (5): 9–10. doi:10.3390/app15052790. ISSN 2076-3417.
  35. ^ Mirjalili, Seyedali (1 August 2024). "Meta just launched the largest 'open' AI model in history. Here's why it matters". The Conversation. Retrieved 14 November 2024.
  36. ^ Waters, Richard (17 October 2024). "Meta under fire for 'polluting' open-source". Financial Times. Retrieved 14 November 2024.
  37. ^ Edwards, Benj (18 July 2023). "Meta launches Llama 2, a source-available AI model that allows commercial applications". Ars Technica. Archived from the original on 7 November 2023. Retrieved 14 December 2024.
  38. ^ a b Thomas, Prasanth Aby (5 November 2024). "Meta offers Llama AI to US government for national security". CIO. Archived from the original on 14 December 2024. Retrieved 14 December 2024.
  39. ^ Chen, Caiwei (24 January 2025). "How a top Chinese AI model overcame US sanctions". MIT Technology Review. Archived from the original on 25 January 2025. Retrieved 3 February 2025.
  40. ^ Guo, Daya; Yang, Dejian; Zhang, Haowei; Song, Junxiao; Wang, Peiyi; Zhu, Qihao; et al. (18 September 2025). "DeepSeek-R1 incentivizes reasoning in LLMs through reinforcement learning". Nature. 645 (8081): 633–638. Bibcode:2025Natur.645..633G. doi:10.1038/s41586-025-09422-z. PMC 12443585. PMID 40962978.
  41. ^ a b Bloom, Peter (12 February 2025). "DeepSeek: how China's embrace of open-source AI caused a geopolitical earthquake". The Conversation. Retrieved 9 December 2025.
  42. ^ Huang, Raffaele (13 August 2025). "China's Lead in Open-Source AI Jolts Washington and Silicon Valley". The Wall Street Journal. Retrieved 9 December 2025.
  43. ^ Cui, Jasmine; Perlo, Jared (30 November 2025). "More of Silicon Valley is building on free Chinese AI". NBC News. Retrieved 9 December 2025.
  44. ^ Welle, Elissa (3 September 2025). "Switzerland releases an open-weight AI model". The Verge. Retrieved 8 October 2025.
  45. ^ Allen, Matthew (2 September 2025). "Switzerland launches transparent ChatGPT alternative". SWI swissinfo.ch. Retrieved 8 October 2025.
  46. ^ Knight, Will. "OpenAI, Anthropic, and Block Are Teaming Up to Make AI Agents Play Nice". Wired. ISSN 1059-1028. Retrieved 16 December 2025.
  47. ^ Claburn, Thomas (9 December 2025). "Linux Foundation aims to become the Switzerland of AI agents". The Register.
  48. ^ Khalili, Joel. "The Race to Build the DeepSeek of Europe Is On". Wired. ISSN 1059-1028. Retrieved 28 January 2026.
  49. ^ Desmarais, Anna (1 December 2025). "Europe is trying to write a new sovereign AI map. Here's how". Euronews. Retrieved 29 January 2026.
  50. ^ a b c Lin, Belle (21 March 2024). "Open-Source Companies Are Sharing Their AI Free. Can They Crack OpenAI's Dominance?". Wall Street Journal. ISSN 0099-9660. Retrieved 16 December 2025.
  51. ^ a b Esteva, Andre; Robicquet, Alexandre; Ramsundar, Bharath; Kuleshov, Volodymyr; DePristo, Mark; Chou, Katherine; et al. (January 2019). "A guide to deep learning in healthcare". Nature Medicine. 25 (1): 24–29. Bibcode:2019NatMe..25...24E. doi:10.1038/s41591-018-0316-z. ISSN 1546-170X. PMID 30617335.
  52. ^ Ashraf, Mudasir; Ahmad, Syed Mudasir; Ganai, Nazir Ahmad; Shah, Riaz Ahmad; Zaman, Majid; Khan, Sameer Ahmad; et al. (2021). "Prediction of Cardiovascular Disease Through Cutting-Edge Deep Learning Technologies: An Empirical Study Based on TENSORFLOW, PYTORCH and KERAS". In Gupta, Deepak; Khanna, Ashish; Bhattacharyya, Siddhartha; Hassanien, Aboul Ella; Anand, Sameer; Jaiswal, Ajay (eds.). International Conference on Innovative Computing and Communications. Advances in Intelligent Systems and Computing. Vol. 1165. Singapore: Springer. pp. 239–255. doi:10.1007/978-981-15-5113-0_18. ISBN 978-981-15-5113-0.
  53. ^ Korshunova, Maria; Ginsburg, Boris; Tropsha, Alexander; Isayev, Olexandr (25 January 2021). "OpenChem: A Deep Learning Toolkit for Computational Chemistry and Drug Design". Journal of Chemical Information and Modeling. 61 (1): 7–13. doi:10.1021/acs.jcim.0c00971. ISSN 1549-9596. PMID 33393291.
  54. ^ Pomfret, James; Pang, Jessie (1 November 2024). "Exclusive: Chinese researchers develop AI model for military use on back of Meta's Llama". Reuters. Retrieved 16 November 2024.
  55. ^ a b c Roth, Emma (4 November 2024). "Meta AI is ready for war". The Verge. Retrieved 16 November 2024.
  56. ^ a b c d e f Piper, Kelsey (2 February 2024). "Should we make our most powerful AI models open source to all?". Vox. Retrieved 16 December 2025.
  57. ^ a b Dean, Jeffrey (1 May 2022). "A Golden Decade of Deep Learning: Computing Systems & Applications". Daedalus. 151 (2): 58–74. doi:10.1162/daed_a_01900. ISSN 0011-5266.
  58. ^ Hassri, Myftahuddin Hazmi; Man, Mustafa (7 December 2023). "The Impact of Open-Source Software on Artificial Intelligence". Journal of Mathematical Sciences and Informatics. 3 (2). doi:10.46754/jmsi.2023.12.006. ISSN 2948-3697.
  59. ^ Solaiman, Irene (24 May 2023). "Generative AI Systems Aren't Just Open or Closed Source". Wired. Archived from the original on 27 November 2023. Retrieved 20 July 2023.
  60. ^ Gujar, Praveen. "Council Post: Building Trust In AI: Overcoming Bias, Privacy And Transparency Challenges". Forbes. Retrieved 27 November 2024.
  61. ^ O'Brien, Matt (30 July 2024). "White House says no need to restrict open-source AI, for now". Associated Press. PBS News. Retrieved 14 November 2024.
  62. ^ Kahn, Jeremy. "Why China's open source AI models are eating the world". Fortune. Retrieved 29 January 2026.