I believe this is the referenced article:
I think what they mean is that ML models generally don’t store their training data directly; instead, they use it to form a compressed latent space. Some elements of the training data may be near-perfectly recoverable from that latent space, but most won’t be. Given that, it’s not very surprising that you can sometimes get a model to reproduce copyrighted material word for word.
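To make the compression point concrete, here’s a minimal autoencoder sketch, assuming a toy PyTorch setup (the dimensions and data here are made up for illustration, not from the article): the latent code holds far fewer numbers than the input, so the model can’t store every training example exactly.

```python
# Toy sketch (assumed setup, illustrative only): squeeze 128-dim inputs
# through an 8-dim latent space. Exact storage of all examples is
# impossible; training keeps whatever structure best reduces average
# reconstruction error.
import torch
import torch.nn as nn

data_dim, latent_dim = 128, 8
enc = nn.Linear(data_dim, latent_dim)   # encoder: data -> latent
dec = nn.Linear(latent_dim, data_dim)   # decoder: latent -> data

opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=1e-3)
x = torch.randn(256, data_dim)          # stand-in "training data"

for _ in range(500):
    opt.zero_grad()
    loss = ((dec(enc(x)) - x) ** 2).mean()  # reconstruction error
    loss.backward()
    opt.step()

# Per-example reconstruction error: some examples come back closer to
# exact than others, but none are literally stored.
per_example = ((dec(enc(x)) - x) ** 2).mean(dim=1)
print(per_example.min().item(), per_example.max().item())
```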
Not sure what other people were claiming, but normally the point being made is that it isn’t possible for a network to memorize a significant portion of its training data. It can definitely memorize significant portions of individual copyrighted works (as shown here), but the whole dataset is far too large relative to the model’s weights to be memorized; a rough back-of-envelope is sketched below.
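Here’s that back-of-envelope with assumed round numbers (a hypothetical model, not figures from the article):

```python
# Back-of-envelope with assumed, round numbers (purely illustrative):
# compare weight storage to training-data size for a hypothetical model.
params = 7e9                 # e.g. a 7B-parameter model
bytes_per_param = 2          # fp16/bf16 weights
weight_bytes = params * bytes_per_param          # ~14 GB

tokens = 2e12                # e.g. ~2 trillion training tokens
bytes_per_token = 4          # rough average bytes of text per token
data_bytes = tokens * bytes_per_token            # ~8 TB

print(f"weights: {weight_bytes / 1e9:.0f} GB")
print(f"training text: {data_bytes / 1e12:.0f} TB")
print(f"data/weights ratio: {data_bytes / weight_bytes:.0f}x")  # ~570x
```

Even under generous assumptions, the training text is hundreds of times larger than the weights, so wholesale memorization can’t happen, even though individual passages can be.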
The big thing you get with Framework laptops is super simple repairability. That means service manuals, parts availability, and easy access to components like the battery, RAM, SSD, etc. Customizable ports are also a nice feature. You can even upgrade the motherboard later down the line instead of buying a whole new laptop.
I haven’t read the article myself, but it’s worth noting that in CS as a whole, and especially in ML/CV/NLP, selective conferences are generally seen as the gold standard for publications, compared to journals. The top conferences include NeurIPS, ICLR, and ICML for ML broadly, CVPR for CV, and EMNLP for NLP.
It looks like the journal in question is a physical sciences journal as well, though I haven’t looked much into it.
I’m curious what field you’re in. I’m in computer vision and ML, and most conferences have clauses saying not to use ChatGPT or other LLM tools. However, most of the folks I work with see no issue with using LLMs to assist with sentence structure, wording, etc., but they generally don’t approve of using LLMs to write accuracy-critical sections (such as the background or results) beyond things like rewording.
I suspect part of the reason conferences are hesitant to allow LLM usage has to do with copyright, since that’s still somewhat of a gray area in the US AFAIK.
Also, one very important aspect of this is that it must be possible to backpropagate through the discriminator. If you only have inference access to a detector of some kind, but not its weights and architecture, you won’t be able to backpropagate through it and therefore can’t get the gradients needed to update your generator’s weights. A minimal sketch of why is below.
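Here’s that gradient path, assuming a toy PyTorch setup (the networks and sizes are made up for illustration):

```python
# Minimal sketch (assumed toy setup): updating a generator requires
# gradients to flow *through* the discriminator, so you need its weights
# and architecture locally, not just black-box inference access.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 32
G = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 64), nn.ReLU(), nn.Linear(64, 1))

opt_G = torch.optim.Adam(G.parameters(), lr=1e-4)
criterion = nn.BCEWithLogitsLoss()

z = torch.randn(8, latent_dim)
fake = G(z)
logits = D(fake)                               # forward pass through D
loss_G = criterion(logits, torch.ones(8, 1))   # G wants D to output "real"

opt_G.zero_grad()
loss_G.backward()   # gradients flow back through D's graph into G
opt_G.step()

# If D were only reachable as a remote API (inference only), there would be
# no computation graph through it, so loss_G.backward() couldn't deliver
# any gradients to G's parameters.
```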
That said, yes, GANs have somewhat fallen out of favor due to their relatively poor sample diversity compared to diffusion models.
For reference, ICML is one of the most prestigious machine learning conferences alongside ICLR and NeurIPS.
I’m a researcher in ML and that’s not the definition that I’ve heard. Normally the way I’ve seen AI defined is any computational method with the ability to complete tasks that are thought to require intelligence.
This definition admittedly sucks. It’s very vague, and it comes with the problem that the bar for “requiring intelligence” shifts every time the field solves something new. We sort of go, “well, given that these relatively simple methods could solve it, I guess it couldn’t have really required intelligence.”
The definition you listed is generally more in line with AGI, which is what people likely think of when they hear the term AI.