Mostly wars against Indians, and apparently some small ongoing skirmishes involving Mexico: https://en.wikipedia.org/wiki/List_of_wars_involving_the_United_States
The Mexican-American War is kinda too far ahead? I’m not sure Americans saw a civil war as “imminent” then.
Not sure about that, but the built-in adblock and privacy features won’t be killed by V3.
If you’re talking about the latest gen desktop CPUs, they just clocked them too high.
This has been an ongoing problem ever since, like, Ivy Bridge/the 3000 series… and yes, probably has to do with management and marketing decisions tbh, so they can be 2% ahead of AMD in some stupid benchmark. AMD is guilty of this too, and you can see what “sanely” clocked chips look like with their X3D series.
That’s a good, concise explanation.
What’s amazing is that, in the backdrop of all this, antagonizing Iran is seen as acceptable and a good idea. It’d be like if the US repeatedly tried to invade Europe or a neighbor or something right before the civil war (which it kinda did, I know… but not like that).
Daily plug for Cromite: https://github.com/uazo/cromite
Chrome, but it doesn’t suck, doesn’t track you, and it has good, fast native adblock.
Also in some Linux repos now. I know it’s in CachyOS. And it’s on Android, too.
I bought AMD at $8 a share (and still hold it, and Xilinx that got folded into it), and I am buying some Intel soon.
The price is right.
I may be an idiot, but I was right once.
Precisely.
Developing one MMO forever is not a great strategy, and I’d argue they aren’t executing it like the Warframe devs (Warframe being its direct competitor, I guess).
I am glad I didn’t, lol. Still not: the datacenter GPUs are something else, and so is their multi-chip design prowess.
It would also be another thing if this was a “hero” CEO.
But… what has Bungie done that’s interesting besides Destiny? Was his plan just to keep doing that?
AMD is super hot right now. Not in a good way.
I bought AMD at $8/share (and am still holding it), and I’m getting a similar vibe from Intel now…
Time to buy?
I’m pretty sure the US gov views them as too big to fail. Surely they can’t mess up Xe, Falcon Shores, and the foundry business, right?
RIGHT!?
Eh, Elon Musk and the crypto industry are not “Big Tech” to me.
Peter Thiel kinda is, but he’s also kinda a black sheep. And their support of Republicans isn’t exactly surprising.
I’m all for this…
But aren’t the Democrats also kinda the party of Big Tech?
It would be amazing if campaigns started having a Fediverse presence, but still.
Reddit still has niches that (unfortunately) exist nowhere else, probably won’t exist anywhere else soon due to the need for foot traffic, and are tolerable as long as old.reddit.com stays up.
And it’s the lesser evil over Discord.
Lemmy is of course 1000x better, but it doesn’t matter if your niche there is a ghost town.
I feel like they have bet the farm on Falcon Shores and (to a lesser extent) the Xe line now, and of course the foundry.
It’d be great if the bad rumors and delays would stop… yeah…
Some services will use glorified RAG to put more current info in the context.
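The basic RAG idea is just “retrieve relevant snippets, then stuff them into the prompt.” Here’s a minimal illustrative sketch; the documents and the naive keyword-overlap scoring are made-up stand-ins, and real services use proper embedding search instead:

```python
# Minimal RAG sketch: rank documents by naive keyword overlap with the
# query, then prepend the top hits to the prompt. Purely illustrative;
# real systems use embedding similarity, not word-set intersection.

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k docs sharing the most words with the query."""
    q_words = set(query.lower().split())
    return sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble a prompt with retrieved context ahead of the question."""
    context = "\n".join(retrieve(query, docs))
    return f"Use this context:\n{context}\n\nQuestion: {query}"

# Toy corpus (contents are placeholders, not real facts)
docs = [
    "Intel announced more Falcon Shores delays this year.",
    "Bananas are rich in potassium.",
    "Kobold.cpp supports Vulkan and ROCm backends.",
]
print(build_prompt("What is the status of Falcon Shores?", docs))
```

The model then answers from the injected context instead of (only) its stale training data.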
But yeah, if it’s just the raw model, I’m not sure what they were expecting.
8GB or 4GB?
Yeah, you should get kobold.cpp’s ROCm fork working if you can manage it; otherwise use their Vulkan build.
Llama 8B at shorter context is probably good for your machine: it can fit on the 8GB GPU at shorter context, or at least be partially offloaded if it’s a 4GB one.
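A rough back-of-envelope shows why context length decides whether it fits. All the numbers here are approximations I’m assuming (Q4_K_M averaging ~4.8 bits/weight, Llama-3-8B-ish dims: 32 layers, 8 KV heads, head dim 128, fp16 cache); treat the output as a sanity check, not gospel:

```python
# Rough estimate of VRAM needed for an 8B model plus its KV cache.
# Assumed figures: ~4.8 bits/weight for Q4_K_M, and Llama-3-8B-like
# dimensions for the cache. Ignores activation/runtime overhead.

def model_gb(params_b: float, bits_per_weight: float = 4.8) -> float:
    """Approximate quantized weight size in GB."""
    return params_b * 1e9 * bits_per_weight / 8 / 1e9

def kv_cache_gb(ctx: int, layers: int = 32, kv_heads: int = 8,
                head_dim: int = 128, bytes_per_val: int = 2) -> float:
    """Approximate fp16 KV cache size in GB (2x for keys and values)."""
    return 2 * layers * kv_heads * head_dim * ctx * bytes_per_val / 1e9

for ctx in (4096, 32768, 131072):
    total = model_gb(8.0) + kv_cache_gb(ctx)
    print(f"ctx {ctx:6d}: ~{total:.1f} GB")
```

At ~4K context the total lands around 5 GB (fits in 8GB); at the full 128K it balloons well past it, which is why short context or partial offload is the move.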
I wouldn’t recommend DeepSeek for your machine. It’s a better fit for older CPUs: it’s not as smart as Llama 8B, and it’s bigger than Llama 8B, but it just runs super fast because it’s an MoE.
Oh I got you mixed up with the other commenter, apologies.
I’m not sure when Llama 8B starts to degrade at long context, but I wanna say it’s well before 128K, which is where other “long context” models start to look much more attractive depending on the task. Right now I am testing Amazon’s Mistral finetune, and it seems to be much better than Nemo or Llama 3.1 there.
4 core i7, 16gb RAM and no GPU yet
Honestly as small as you can manage.
Again, you will get much better speeds out of “extreme” MoE models like deepseek chat lite: https://huggingface.co/YorkieOH10/DeepSeek-V2-Lite-Chat-Q4_K_M-GGUF/tree/main
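To make the “super fast because it’s an MoE” point concrete: token generation is mostly memory-bandwidth bound, and an MoE only reads the weights of the experts that fire. The numbers below are rough assumptions on my part (~2.4B active params for V2-Lite, ~4.8 bits/weight at Q4_K_M, ~40 GB/s for dual-channel DDR4):

```python
# Back-of-envelope token rate for bandwidth-bound CPU inference.
# Assumed figures: memory bandwidth and bits/weight are ballpark only.

BANDWIDTH_GBPS = 40  # rough dual-channel DDR4 throughput; machine-dependent

def tokens_per_sec(active_params_b: float, bits_per_weight: float = 4.8) -> float:
    """Upper-bound tok/s if every token must stream the active weights."""
    bytes_per_token = active_params_b * 1e9 * bits_per_weight / 8
    return BANDWIDTH_GBPS * 1e9 / bytes_per_token

dense_8b = tokens_per_sec(8.0)   # dense model reads all ~8B weights per token
moe_lite = tokens_per_sec(2.4)   # MoE reads only the ~2.4B active params
print(f"dense 8B: ~{dense_8b:.1f} tok/s, MoE lite: ~{moe_lite:.1f} tok/s")
```

Same total download size class, but roughly 3x the tokens per second on the same RAM, which is the whole appeal on a GPU-less box.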
Another thing I’d recommend is running kobold.cpp instead of ollama if you want to get into the nitty-gritty of LLMs. It’s more customizable and (ultimately) faster on more hardware.