• Chinese AI lab DeepSeek launched the DeepSeek-R1 model, rivaling OpenAI in math reasoning and code generation.

  • The model is open-sourced (in part*) for global research use.

  • Requires far less computing power than competitors such as Meta.

  • Competes with OpenAI in critical areas such as mathematical reasoning, code generation, and cost efficiency.

  • Worked around U.S. chip export restrictions through an optimized architecture.

  • Big Tech are sore losers

*DeepSeek employs a dual licensing structure for its models. The codebase for DeepSeek-Coder-V2 is released under the MIT License, which allows unrestricted use, modification, and distribution. However, the pre-trained models are governed by the DeepSeek License Agreement, which permits research and commercial use with specific restrictions intended to prevent harmful applications. So while DeepSeek's models are open in many respects, some argue they do not fully meet the criteria for being considered "open source" because of these licensing nuances.

  • dreadbeef@lemmy.dbzer0.com · 3 days ago

    You are fighting a losing battle. I understand why you think that, but the organization that owns the trademarks of open source does not agree with you (or me). I also disagree with that organization’s definition of open source AI, but they own the legal right to define the meaning of “open source” in the technology trade, the trade in which they own the trademark. But laws are laws, and you either abide by them (as a corp, what are you gonna do?) or don’t (fuck yeah, commit crimes).

    Weights are the only thing you’ll get from “open source” AI. You need to look for stricter legal definitions to meet your understandable criteria.