![](https://fry.gs/pictrs/image/c6832070-8625-4688-b9e5-5d519541e092.png)
I game on Linux. Go check protondb for compatibility with your favourite game
This is the way
What else would you supplement a terrible diet with?
Nobody drinks Lipton in the UK
I recall that there is a USB GPIO dongle which gives you a bunch of pins to play with. You would have to hunt around to find it though.
I had an Advantage Pro, the original, but the rubber Fn keys eventually gave up the ghost. I also needed something more portable for work, so I have migrated to an ErgoDox now. Still miss that keywell though
I thought they meant Linux Mint, the Debian derivative. Very confused until I read the comments… perhaps I should read the article 😳
The browser will auto-generate passwords for you. Along with cross-device browser sync, you pretty much never see them
What’s the digital clock in your terminal?
I feel like that’s a way to rapidly run out of spare universes
There are others above who provide instructions warning against bypassing paywalls (⌐■_■)
Check protondb. It sometimes has workarounds for launcher issues.
This was new to me. Thanks!
Tell me more…
Thank you for the world’s best editor, Bram. :x!
I can see most individuals and SMBs going with specialist “good enough” models which they can run on-prem or locally, leaving the truly huge systems to those with compute to spare. The security model for these MaaS systems is pretty much “trust me bro”. A lot of companies will not want to, or be able to, trust such a system. PII/CID cannot be left in the hands of the AI-as-a-service company. They will have to either go on-prem or stand up their own models in their private cloud. Again, this limits model size for orgs, available compute, etc. This points to using available, optimised models. OSS FTW (I hope)
Given the pace of OSS optimisation, I fully expect the requirements for a GPT-3.5-equivalent model to be much lower in the coming year. The biggest issues right now are around training and fine-tuning; inference is cheaper, resource-wise. For truly large models, the moat is most definitely GPU compute and power constraints. Those who own their own GPU farms will be at an advantage until there is a significant increase in cloud GPU capacity - right now, cloud GPU is at a premium and can also involve wait time for access. I don’t expect this to change in the next year or two.
TL;DR: the moat is real, but it’s GPU and power constraints.
Thanks for the link!
We are deep in the technical weeds here. 95% of Linux usage really doesn’t require such humour, unfortunately.