Beyond LLM guidelines and generated contributions
Hi everybody,
First, thank you for the great time at the OCA Days and I hope you are enjoying the OXP.
The ongoing discussion on LLM guidelines ~~had me triggered~~ got me thinking.
I would like to give a different perspective on LLMs and what we as a community should care about.
Let's talk about digital sovereignty and power consumption. Most LLMs used by developers are made by big tech companies [0]. These companies have built huge data centers that consume an incredible amount of power to run LLMs, and they plan to build nuclear reactors to meet the demand. Anyone who is seriously concerned with digital sovereignty and cares about the environment knows that we need to become independent of big tech and use less energy.
> Supporting AI tools by big tech is a step in the wrong direction.
Moreover, the AI hype is igniting the next level of data collection and user tracking. The data we generate when interacting with an AI chat and the like is incredibly valuable. As of now the tech bros still have no viable business model and burn money at an incredible rate [1].
Every week a new model is released and we feel pressured to use AI. Let us take a break and take it slow moving forward. We are not forced to do anything (at least for now).
I ask the OCA members to do better than creating an OpenAI account, installing Claude Desktop or clicking the Copilot icon. Use tools and create workflows that respect your privacy and the privacy of others. Make sure your stack uses less energy over time, works independently and remains under your control.
> Let us create guides on how to do better.
Here is an example, not to show off, but simply to share:
The LLMs I am using are hosted by Infomaniak in Switzerland [2]. They run on "green" energy. I pay per token and I know where the data is stored. On the command line, I am using the LLM CLI [3] by Simon Willison (LLM researcher). Every LLM tool that supports the OpenAI API standard can be connected to an Infomaniak LLM. There are great scripts that help you write better code [4]. I don't have any kind of IDE integration [5]. I like to write code and not press tab tab tab ...
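As a sketch of what that setup looks like: the `llm` CLI can register additional OpenAI-compatible endpoints through an `extra-openai-models.yaml` file in its config directory. The base URL and model id below are placeholders, not Infomaniak's actual values — take those from your provider's documentation.

```yaml
# ~/.config/io.datasette.llm/extra-openai-models.yaml
# Registers an OpenAI-API-compatible endpoint with the llm CLI.
# api_base and model_name are placeholders -- substitute the values
# your provider publishes.
- model_id: infomaniak-example        # name you will pass to `llm -m`
  model_name: example-model           # model id the provider expects
  api_base: "https://api.example.infomaniak.com/v1"
  api_key_name: infomaniak            # key stored via `llm keys set`
```

After storing the API key with `llm keys set infomaniak`, the model is usable like any other: `llm -m infomaniak-example "Review this diff"`.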
Is anybody interested in creating guides on how to work (better) with LLMs and making contributions that help the ecosystem?
Kind regards,
Janik
[0]: https://en.wikipedia.org/wiki/Big_Tech
[1]: https://www.wheresyoured.at/the-haters-gui/
[2]: https://www.infomaniak.com/en/hosting/ai-tools
[3]: https://llm.datasette.io/
[4]: https://notes.billmill.org/blog/2025/07/An_AI_tool_I_find_useful.html
[5]: https://janikvonrotz.ch/2025/01/27/work-with-llms-on-the-command-line/
by Janik von Rotz - 11:05 - 19 Sep 2025