![](/static/253f0d9b/assets/icons/icon-96x96.png)
![](https://lemmy.world/pictrs/image/eb9cfeb5-4eb5-4b1b-a75c-8d9e04c3f856.png)
people who’ve never been laid
That was unnecessary. I know that people with poor social skills have more trouble with romance, but implying that all virgins are socially inept is a harmful stereotype, luck is a big factor in finding relationships.
people who’ve never been laid
That was unnecessary. I know that people with poor social skills have more trouble with romance, but implying that all virgins are socially inept is a harmful stereotype, luck is a big factor in finding relationships.
It’s absolutely amazing, but it is also literally and technologically impossible for that to spontaneously coelesce into reason/logic/sentience.
This is not true. If you train these models on game of Othello, they’ll keep a state of the world internally and use that to predict the next move played (1). To execute addition and multiplication they are executing an algorithm on which they were not explicitly trained (although the gpt family is surprisingly bad at it, due to a badly designed tokenizer).
These models are still pretty bad at most reasoning tasks. But training on predicting the next word is a perfectly valid strategy, after all the best way to predict what comes after the “=” in 1432 + 212 = is to do the addition.
Now let’s look at Office. Open an Excel spreadsheet with tables in any app other than excel. Tables are something that’s just a given in excel, takes 10 seconds to setup, and you get automatic sorting and filtering, with near-zero effort. No, I’m not setting up a DB in an open-source competitor to Access. That’s just too much effort for simple sorting and filtering tasks, and isn’t realistically shareable with other people.
Am I missing something or isn’t it exactly the same thing in libre office ?
I don’t believe that there are solutions that are as complete as team, for video and voice calls it’s among the best.
But it’s so bad for text ! Why do I have to wait for a second when I change channels ? Why does it not support markdown (the partial implementation that it has is arguably worse than no implementation at all) ? Why is the search so bad ?
Convolutional neural networks and plant identifying apps came before chat gpt. Beyond both relying on neural networks they don’t have much in common.
Don’t know why you are down voted it’s a good question.
As a matter of fact it almost happened for search engines in France. Newspaper’s argued that snippets were leading people to not go into their ad infested sites thus losing them revenue.
https://techcrunch.com/2020/04/09/frances-competition-watchdog-orders-google-to-pay-for-news-reuse/
Why would java have an impact on battery performance ? Pretty much all credit cards run java for their encryption algorithms, and they need pretty much no power to run.
I mean yes in the sense that the capture of civilians has a clear military objective. Doesn’t make it less awful.
One genocidal state doesn’t justify another one. There are no good guys in this conflict. That said one side has more bombs than the other so we should be focusing on that side. But please, no justifying war crimes.
Reference counting is a GC though ?
It’s a bad one sure and will leak memory in cases of a cycle which most tracing GC are able to do.
It’s main advantage is that there are no GC pauses.
They are prisoners of Hamas
Hamas is controlling Gaza through a dictatorship yes. But their ideas are popular.
Even more radical political parties like Lion’s den or Palestinian islamic jihad having higher approval ratings.
That said even Fatah is more popular than Hamas.
WordPress wirh custom templates running on a LAMP stack.
Even if 99% of it would evaporate that would still be a ridiculous amount of power.
But Bill Gates proved that diversifying a stock of mainly one company while having that company keep all its value is possible. Elon Musk is horrifyingly rich like it or not. His power and the damage he can do is huge.
I’m afraid that would not be sufficient.
These instructions are a small part of what makes a model answer like it does. Much more important is the training data. If you want to make a racist model, training it on racist text is sufficient.
Great care is put in the training data of these models by AI companies, to ensure that their biases are socially acceptable. If you train an LLM on the internet without care, a user will easily be able to prompt them into saying racist text.
Gab is forced to use this prompt because they’re unable to train a model, but as other comments show it’s pretty weak way to force a bias.
The ideal solution for transparency would be public sharing of the training data.