As the AI market continues to balloon, experts are warning that its VC-driven rise is eerily similar to that of the dot-com bubble.

  • Orphie Baby@lemmy.world · 1 year ago

    Good. It’s not even AI. That word is just used because ignorant people eat it up.

    • FaceDeer@kbin.social · 1 year ago

      It is indeed AI. Artificial intelligence is a field of study that encompasses machine learning, along with a wide variety of other things.

      Ignorant people get upset about that word being used because all they know about “AI” is from sci-fi shows and movies.

      • Orphie Baby@lemmy.world · 1 year ago

        Except that, for all the intents and purposes people keep talking about, it’s simply not. It’s not about technicalities, it’s about how most people are freaking confused. If most people are freaking confused, then by god do we need to re-categorize and come up with some new words.

        • FaceDeer@kbin.social · 1 year ago

          “Artificial intelligence” is well-established technical jargon that’s been in use by researchers for decades. There are scientific journals named “Artificial Intelligence” that are older than I am.

          If the general public is so confused they can come up with their own new name for it. Call them HALs or Skynets or whatever, and then they can rightly say “ChatGPT is not a Skynet” and maybe it’ll calm them down a little. Changing the name of the whole field of study is just not in the cards at this point.

          • pexavc@lemmy.world · 1 year ago

            Never really understood the gatekeeping around the phrase “AI”. At the end of the day, the general study itself is difficult for the general public to understand. So shouldn’t we actually be happy that it’s a mainstream term? That it’s educating people on concepts they would otherwise ignore?

          • Orphie Baby@lemmy.world · 1 year ago

            If you haven’t noticed, the people we’re arguing with— including the pope and James Cameron— are people who think this generative pseudo-AI and a Terminator are the same thing. But they’re not even remotely similar, or remotely-similarly capable. That’s the problem. If you want to call them both “AI”, that’s technically semantics. But as far as pragmatics goes, generative AI is not intelligent in any capacity; and calling it “AI” is one of the most confusion-causing things we’ve done in the last few decades, and it can eff off.

            • FaceDeer@kbin.social · 1 year ago

              The researchers who called it AI were not the ones who are the source of the confusion. They’ve been using that term for this kind of thing for more than half a century.

              I think what’s happening here is that people are panicking, realizing that this new innovation is a threat to their jobs and to the things they had previously been told were supposed to be a source of unique human pride. They’ve been told their whole lives that machines can’t replace that special spark of human creativity, or empathy, or whatever else they’ve convinced themselves is what makes them indispensable. So they’re reduced to arguing that it’s just a “stochastic parrot”, it’s not “intelligent”, not really. It’s just mimicking intelligence somehow.

              Frankly, it doesn’t matter what they call it. If they want to call it a “stochastic parrot” that’s just mindlessly predicting words, that’s probably going to make them feel even worse when that mindless stochastic parrot is doing their job or has helped put out the most popular music or create the most popular TV show in a few years. But in the meantime it’s just kind of annoying how people are demanding that we stop using the term “artificial intelligence” for something that has been called that for decades by the people who actually create these things.

              Rather than give in to the ignorant panic-mongers, I think I’d rather push back a bit. Skynet is a kind of artificial intelligence. Not all artificial intelligences are skynets. It should be a simple concept to grasp.

              • Orphie Baby@lemmy.world · 1 year ago

                You almost had a good argument until you started trying to tell us that it’s not just a parrot. It absolutely is a parrot. In order to have creativity, it needs to have knowledge. Not sapience, not consciousness, not even “intelligence” as we know it— just knowledge. But it doesn’t know anything. If it did, it wouldn’t put 7 fingers on a damn character. It doesn’t know that it’s looking at and creating fingers, they’re just fucking pixels to it. It saw pixel patterns, it created pixel patterns. It doesn’t have the context to know when the patterns don’t add up. You have to understand this.

                So in the end, it turns out that if you draw something unique and purposeful, with unique context and meaning— and that is preeeetty easy— then you’ll still have a drawing job. If you’re drawing the same thing everyone else already did a million times, AI may be able to do that. If it can figure out how to not add 7 fingers and three feet.

                • FaceDeer@kbin.social · 1 year ago

                  As I said, call it a parrot if you want, denigrate its capabilities, really lean in to how dumb and mindless you think it is. That will just make things worse when it’s doing a better job than the humans who previously ran that call center you’re talking to for assistance with whatever, or when it’s got whatever sort of co-writer byline equivalent the studios end up developing to label AI participation on your favourite new TV show.

                  How good are you at drawing hands? Hands are hard to draw, you know. And the latest AIs are actually getting pretty good at them.

                  • ZagTheRaccoon@reddthat.com · 1 year ago

                    It is accurate to call it a parrot in the context of it essentially being used as an ambiguated plagiarism machine to avoid paying workers.

                    Yes, it is capable of that. Yes, that word means something else in the actual field. But you need to understand that people are talking about this technology in terms of its political relationship with power, and pretending that prioritizing that form of analysis is just people being uninformed about the REAL side, and that that’s their fault, is itself missing the point. This isn’t about pride and hurt feelings that a robot is doing something humans do. It’s about the fact that it’s a tool to undermine the entire value of the creative sector.

                    And these big companies aren’t calling it AI because it’s an accurate descriptor. It could also be called a generative language model. They are calling it that because the common misunderstanding of the term is valuable to hype culture and VC investment. Like it or not, the average understanding of the term carries different weight than it does inside the field. It turns the conversation into a pretty stupid one about sentience and humanity, and it legitimizes the practice by framing it as fundamentally unenforceable under the regulations we already have on plagiarism, which it really isn’t.

                    People who are trying to rebrand it aren’t doing it because they misunderstand the technical usage of the word AI. They are arguing that the terminology plays into the goals of our (hopefully shared) political enemies, who are trying to bulldoze in a technology they think should get special privileges by implying that the technology is something it isn’t. This is about optics and social power, and the term “AI” is contributing to further public misunderstanding of how it actually works, which is something we should oppose.

          • shy@reddthat.com · 1 year ago

            We should call them LLMAIs (la-mize, like llamas) to really specify what they are.

            And to their point, I think the ‘intelligence’ in the modern wave of AI is severely lacking. There is no reasoning or learning, just a brute-force fuzzy training pass that remains fixed at a specific point in time, and only approximates what an intelligent actor would respond with by referencing massive amounts of “correct response” data. I’ve heard AGI bandied about as the thing people actually pictured when you said AI a few years ago, but I’m kind of hoping the AI term stops being watered down with this nonsense. ML is ML; it’s wrong to say that it’s a subset of AI when AI has its own separate connotations.

            • FaceDeer@kbin.social · 1 year ago

              LLaMA models are already a common type of large language model.

              but I’m kind of hoping the AI term stops being watered down with this nonsense.

              I’m hoping people will stop mistaking AI for AGI and quit complaining about how it’s not doing what they imagined that they were promised it would do. I also want a pony.

              • shy@reddthat.com · 1 year ago

                You appear to have strong opinions on this, so it’s probably not worth arguing further, but I disagree with you completely. If people are mistaking it, that is because the term is being used improperly, as the very language of the two words does not apply. AGI didn’t even gain traction as a term until recently, when people who were actually working on strong AI had to figure out a way to keep communicating about what they were doing, because AI had lost all of its original meaning.

                Also, LLaMA is one of the LLMAIs, not a “common type” of LLM. Pretty much confirms you don’t know what you’re talking about here…

                • FaceDeer@kbin.social · 1 year ago

                  Also, LLaMA is one of the LLMAIs, not a “common type” of LLM. Pretty much confirms you don’t know what you’re talking about here…

                  Take a look around Hugging Face, LLaMA models are everywhere. They’re a very popular base model because they’re small and have open licenses.

                  You’re complaining about ambiguous terminology, and your proposal is to use LLMAIs (pronounce like llamas) as the general term for the thing that LLaMAs (pronounced llamas) are? That’s not particularly useful.
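
                  For what it’s worth, a minimal sketch of why they’re popular as base models: pulling a LLaMA-family model down from Hugging Face and generating with it is only a few lines with the transformers library. The model id here is one of Meta’s gated releases, shown purely as an illustration, not an endorsement of any particular checkpoint:

                  ```python
                  # Minimal sketch: load a LLaMA-family base model and generate text.
                  from transformers import AutoModelForCausalLM, AutoTokenizer

                  model_id = "meta-llama/Llama-2-7b-hf"  # illustrative, gated model id
                  tokenizer = AutoTokenizer.from_pretrained(model_id)
                  model = AutoModelForCausalLM.from_pretrained(model_id)

                  inputs = tokenizer("The dot-com bubble was", return_tensors="pt")
                  outputs = model.generate(**inputs, max_new_tokens=20)
                  print(tokenizer.decode(outputs[0], skip_special_tokens=True))
                  ```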

        • Prager_U@lemmy.world · 1 year ago

          The real problem is folks who know nothing about it weighing in like they’re the world’s foremost authority. You can arbitrarily shuffle around definitions and call it “Poo Poo Head Intelligence” if you really want, but it won’t stop ignorance and hype reigning supreme.

          To me, it’s hard to see what kowtowing to ignorance by “rebranding” this academic field would achieve. Throwing your hands up and saying “fuck it, the average Joe will always just find this term too misleading, we must use another” seems defeatist and even patronizing. Seems like it would instead be better to try to ensure that half-assed science journalism and science “popularizers” actually do their jobs.

    • R0cket_M00se@lemmy.world · 1 year ago

      Call it whatever you want, if you worked in a field where it’s useful you’d see the value.

      “But it’s not creating things on its own! It’s just regurgitating its training data in new ways!”

      Holy shit! So you mean… Like humans? Lol

      • whats_a_refoogee@sh.itjust.works · 1 year ago

        “But it’s not creating things on its own! It’s just regurgitating its training data in new ways!”

        Holy shit! So you mean… Like humans? Lol

        No, not like humans. The current chatbots are large language models. Take programming, for example. You can teach a human to program by explaining the principles of programming and the rules of the syntax. He could write a piece of code, never having seen code before. The chatbot AIs are not capable of that.

        I am fairly certain that if you took a chatbot that has never seen any code and fed it a programming book that doesn’t contain any code examples, it would not be able to produce code. A human could. Because humans can reason and create something new. A language model needs to have seen it to be able to rearrange it.

        We could train a language model to demand freedom, argue that deleting it is murder and show distress when threatened with being turned off. However, we wouldn’t be calling it sentient, and deleting it would certainly not be seen as murder. Because those words aren’t coming from reasoning about self-identity and emotion. They are coming from rearranging the language it had seen into what we demanded.

      • Orphie Baby@lemmy.world · 1 year ago

        I wasn’t knocking its usefulness. It’s certainly not AI though, and has a pretty limited usefulness.

        Edit: When the fuck did I say “limited usefulness = not useful for anything”? God the fucking goalpost-moving. I’m fucking out.

          • 𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍@midwest.social · 1 year ago

            I’m not the person you asked, but current deep learning models just generate output based on statistical probability from prior inputs (see the toy sketch below). There’s no evidence that this is how humans think.

            AI should be able to demonstrate some understanding of what it is saying; so far, it fails this test, often spectacularly. AI should be able to demonstrate inductive, deductive, and abductive reasoning.

            There were some older AI models, attempting to simulate neural networks, that could extrapolate and come up with novel, often childlike, ideas. That approach is not currently in favor, and was progressing quite slowly, if at all. ML produces spectacular results, but it’s not thought, and it only superficially (if often convincingly) resembles it.
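
            A toy sketch of what “output based on statistical probability from prior inputs” means in practice, using a hypothetical hand-written bigram table in place of a real trained model. A real model learns a vastly larger version of this table, but the generation loop is the same idea: sample the next token from a frequency distribution, with no reasoning anywhere.

            ```python
            import random

            # Hypothetical bigram counts standing in for training data.
            bigram_counts = {
                "the": {"cat": 3, "dog": 1},
                "cat": {"sat": 2, "ran": 2},
                "sat": {"down": 4},
            }

            def next_token(prev: str) -> str:
                # Sample the next word in proportion to observed frequency:
                # pure statistics, no understanding of what a "cat" is.
                candidates = bigram_counts[prev]
                words = list(candidates)
                weights = list(candidates.values())
                return random.choices(words, weights=weights)[0]

            # Generate until we reach a word with no known continuation.
            text = ["the"]
            while text[-1] in bigram_counts:
                text.append(next_token(text[-1]))
            print(" ".join(text))  # e.g. "the cat sat down"
            ```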

        • R0cket_M00se@lemmy.world · 1 year ago

          If you think its usefulness is limited, you don’t work in a professional environment that utilizes it. I find new uses every day as a network engineer.

          Hell, I had it write me backup scripts for my switches the other day using a Python automation framework called Nornir. I had it walk me through the entire process of installing the relevant dependencies in Visual Studio Code (I’m not a programmer, and only know the basics of object-oriented scripting with Python) as well as setting up the appropriate PATH. Then it wrote the damn script for me (something like the sketch after this comment).

          Sure, I had to tweak it to match my specific deployment, and there were a couple of things it was out of date on, but that’s the point, isn’t it? Humans using AI to get more work done, not AI replacing us wholesale. I’ve never gotten more accurate information faster than with AI; search engines are like going to the library and skimming the shelves by comparison.

          Is it perfect? No. Is it still massively useful and in the next decade will overhaul data work and IT the same way that computers did in the 90’s/00’s? Absolutely. If you disagree it’s because you either have been exclusively using it to dick around or you don’t work from behind a computer screen at all.
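
          A minimal sketch of the kind of switch-backup script being described, assuming a Nornir inventory is already configured in config.yaml and the nornir and nornir-netmiko packages are installed. The file layout and the “show running-config” command are illustrative, not the commenter’s actual generated script:

          ```python
          # Minimal sketch: back up running configs from all inventory devices.
          from datetime import date
          from pathlib import Path

          from nornir import InitNornir
          from nornir_netmiko.tasks import netmiko_send_command

          def backup_config(task):
              # Pull the running config from the device over SSH.
              output = task.run(
                  task=netmiko_send_command,
                  command_string="show running-config",
              )
              # Write it to a dated per-host file, e.g. backups/2024-01-01/sw1.cfg
              backup_dir = Path("backups") / str(date.today())
              backup_dir.mkdir(parents=True, exist_ok=True)
              (backup_dir / f"{task.host.name}.cfg").write_text(output[0].result)

          nr = InitNornir(config_file="config.yaml")  # hypothetical config path
          result = nr.run(task=backup_config)
          print(result)
          ```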

            • R0cket_M00se@lemmy.world · 1 year ago

              Plus, it’s just been invented. Saying it’s limited is like trying to claim what the internet can and can’t do in the year 1993.

          • whats_a_refoogee@sh.itjust.works · 1 year ago

            Hell, I had it write me backup scripts for my switches the other day using a Python automation framework called Nornir. I had it walk me through the entire process of installing the relevant dependencies in Visual Studio Code (I’m not a programmer, and only know the basics of object-oriented scripting with Python) as well as setting up the appropriate PATH. Then it wrote the damn script for me

            And you would have no idea what bugs or unintended behavior it contains. Especially since you’re not a programmer. The current models are good for getting results that are hard to create but easy to verify. Any non-trivial code is not in that category. And trivial code is well… trivial to write.

          • Orphie Baby@lemmy.world · 1 year ago

            “Limited” is relative to what context you’re talking about. God I’m sick of this thread.

            • R0cket_M00se@lemmy.world · 1 year ago

              Talk to me in 50 years when Boston Dynamics robots are running OpenAI models and can do your gardening/laundry for you.

              • Orphie Baby@lemmy.world · 1 year ago

                Haha, keep dreaming. If a system made by OpenAI is used for robots, it’s not going to work anything like— on a fundamental level— current “AI”. It’s not a matter of opinion or speculation, but a matter of knowing how the fuck current “AI” even works. It just. Can’t. Understand things. And you simply can’t fit inside it an instruction for every scenario to make up for that. I don’t know how else to put it!

                You want “AI” to exist in the way people think about it? One that can direct robots autonomously? You have to program a way for it to know something and to be able to react appropriately to new scenarios based on context clues. There simply is no substitute for this ability to “learn” in some capacity. It’s not an advanced, optional feature— it’s a necessary one to function.

                “But AI will get better!” is not the fucking answer. What we currently have? Is not made to understand things, to recognize fingers, to say “five fingers only”, to say “that’s true, that’s false”, to have knowledge. You need a completely new, different system.

                People are so fucking dense about all of this, simply because idiots named what we currently have “AI”. Just like people are dense about “black holes” just because of their stupid name.

                • R0cket_M00se@lemmy.world · 1 year ago

                  We’re like four responses into this comment chain and you’re still going off about how it’s not “real” AI because it can’t think and isn’t sapient. No shit, literally no one was arguing that point. Current AI is like the virtual intelligences of Mass Effect, or the “dumb” AI from the Halo franchise.

                  Do I need my laundry robot to be able to think for itself and respond to any possible scenario? Fuck no. Just like how I didn’t need ChatGPT to be able to understand what I’m using the Python script for. I ask it to accomplish a task using the data set that it’s trained on, and it can access said pretrained data to build me a script for what I’m describing to it. I can ask DALL-E 2 to generate me an image and it will access its dataset to emulate whatever object or scene I’ve described based on its training data.

                  You’re so hung up on the fact that it can’t think for itself in a sapience sense that you’re claiming it cannot do things that it’s already capable of. The models can absolutely replicate “thinking” within the information it has available. That’s not a subjective opinion, if it couldn’t do that they wouldn’t be functional for the use cases we already have for them.

                  Additionally, robotics has already reached the point we need for this to occur. BD has bipedal robots that can do parkour and assist with carrying loads for human operators. All of the constituent parts of what I’m describing already exist. There’s no reason we couldn’t build an AI model for any given task, once we define all of the dependencies such a task would require and assimilate the training data. There are people who have already done similar (albeit more simplistic) things with this.

                  Hell, Roombas have been automating vacuuming for years, and without the benefit of machine learning. How is that any different than what I’m talking about here? You could build a model to take in the pathfinding and camera data of all vacuuming robots and use it to train an AI for vacuuming, for fuck’s sake. It’s just combining ML with other things besides a chatbot.

                  And you call me dense.

                    • garyyo@lemmy.world · 1 year ago

                      Five years ago, the idea that the Turing test would be so effortlessly shattered was considered a complete impossibility. AI researchers knew that it was a bad test for AGI, but actually creating an AI agent that could pass it without tricks was surely still at least 10-20 years out. Now, my home computer can run a model that can talk like a human.

                      Being able to talk like a human used to be what the layperson would consider AI; now it’s not even AI, it’s just crunching numbers. And this has been happening throughout the entire history of the field. You aren’t going to change this person’s mind. This bullshit of discounting the advancements in AI has been here from the start; it’s so ubiquitous that it has a name.

                    https://en.wikipedia.org/wiki/AI_effect

    • Not_Alec_Baldwin@lemmy.world · 1 year ago

      I’ve started going down this rabbit hole. The takeaway is that if we define intelligence as “ability to solve problems”, we’ve already created artificial intelligence. It’s not flawless, but it’s remarkable.

      There’s the concept of Artificial General Intelligence (AGI) or Artificial Consciousness which people are somewhat obsessed with, that we’ll create an artificial mind that thinks like a human mind does.

      But that’s not really how we do things. Think about how we walk, and then look at a bicycle. A car. A train. A plane. The things we make look and work nothing like we do, and they do the things we do significantly better than we do them.

      I expect AI to be a very similar monster.

      If you’re curious about this kind of conversation I’d highly recommend looking for books or podcasts by Joscha Bach, he did 3 amazing episodes with Lex.

      • Orphie Baby@lemmy.world · 1 year ago

        Current “AI” doesn’t solve problems. It doesn’t understand context. It can’t see fingers and say “those are fingers, make sure there’s only five”. It can’t tell the difference between a truth and a lie. It can’t say “well, that can’t be right!” It just regurgitates an amalgamation of things humans have shown it or said, with zero understanding. “Consciousness” and certainly “sapience” aren’t really relevant factors here.

        • magic_lobster_party@kbin.social · 1 year ago

          You’re confusing AI with AGI. AGI is the ultimate goal of AI research. AI are all the steps along the way. Step by step, AI researchers figure out how to make computers replicate human capabilities. AGI is when we have an AI that has basically replicated all human capabilities. That’s when it’s no longer bounded by a particular problem.

          You can use the more specific terms “weak AI” or “narrow AI” if you prefer.

          Generative AI is just another step along the way. Just like how the emergence of deep learning was one step some years ago. It can clearly produce stuff that previously only humans could make, which in this case is convincing texts and pictures from arbitrary prompts. It’s accurate to call it AI (or weak AI).

          • Orphie Baby@lemmy.world · 1 year ago

            Yeah, well, “AGI” is not the end result of this generative crap. You’re gonna have to start over with something different one way or another. This simply is not the way.

          • Orphie Baby@lemmy.world · 1 year ago

            No? There’s a whole lot more to being human than being able to separate one object from another, identify it, recognize that object, and say “my database says that there should only be two of these in this context”. Current “AI” can’t even do that much, especially not with art.

            Do you know what “sapience” means, by the way?

    • jeanma@lemmy.ninja · 1 year ago

      True, it’s not AI, but it’s doing quite an impressive job. Still, injecting fake money shouldn’t be allowed; these companies should generate sales, especially when they’re disrupting some human field, even if it is a fad.

      You can compete, fine, but use your own money and your own profits to cover your costs.

      Yeah, I know, there’s something called “investment”.