Reminds me when X-rays first were discovered, everyone claimed their product contained them, including X-ray headache tablets, golf balls, stove polish, razor blades. https://pubs.rsna.org/doi/full/10.1148/rg.242035157 for some fun pictures.
I won't be surprised when AI toothbrushes come out.
The problem might be less with the chemicals in the polish per se and more with the mechanical processes surrounding its manufacture and use. Many otherwise safe substances become explosive when aerosolized.
> Stove Polish Explodes. [by the Associated Press]
> LOS ANGELES, Feb. 24. Mrs. Sylvia Olds, wife of George Olds, of Gardena, was probably fatally burned by the explosion of a bottle of stove polish today.
Some of those do seem to be claiming to contain X-rays but others, like the golf balls, just seem to be using it as a brand name. I don't see the problem with that: we don't complain about the lack of snakes in Python. The "calling everything AI" problem seems to be exclusively claiming that your product actually uses AI.
If computing ability were to become as cheap and low-power as human/animal brains, you could have the full capacities of a present-day dentist contained in the head of a present-day powered toothbrush. Just make the bristles freely moving, add some hard pieces for scraping plaque, and you have a thing that could use that intelligence.
Which is to say, sure, today "AI" is just a marketing term but that's 'cause they don't have it, not because real intelligence isn't something that would be very useful in many, seemingly trivial places.
Yeah, I once had an employer offer a dental plan from a company that sent us all Wi-Fi-capable toothbrushes, complete with a mobile app, with the promise of discounts on our group rate in exchange for letting the things phone home.
The rebuttal to this view is "provide a principled definition of intelligence". Doesn't seem like the article does this.
A hint appears partway through: "computers will not be able to match humans in their ability to reason abstractly about real-world situations". Does human intelligence distinguish itself by its "abstractness" and by its application to "the real world"? There's also "the systems do not form the kinds of semantic representations and inferences that humans are capable of". Seems like a promising direction for some definition of intelligence.
For my money, we will never consider any machine intelligent as long as we mass produce it. The only way we'll accept machines as intelligent is if, as the singularity theorists say, the machines build themselves. Then we aren't really mass producing them, we're just kicking off a process that we don't totally understand, a bit like gestation.
> The rebuttal to this view is "provide a principled definition of intelligence". Doesn't seem like the article does this.
If someone could do that, it would be a significant stride towards producing an intelligent entity.
As far as "AI research" goes, we're somewhat stuck in the position that humans "can't define intelligence but know it when they see it". But if we abandon this intuition-based approach, it seems like we wind up with the argument, "this toaster is intelligent 'cause you can't prove it's not".
> We won't consider machines intelligent until they have and demonstrate the capacity to consider us not-intelligent.
I don't think it needs to go that far. Birds that can solve simple puzzles are obviously intelligent, so are cats who can open doors, etc. I think the human perception of intelligence is hugely based on notions like "is this thing self-aware?", "is there someone in there?", "does it have a mind of its own?"
I would say we'll consider machines intelligent when they're able to have some amount of independence. Right now, they can't solve problems they're not specifically designed/engineered to solve. We don't have any kind of general-purpose artificial intelligence. You might try to argue that reinforcement learning is that, but no, RL doesn't really work outside of a sandbox with an artificial reward function, and it's very sensitive to hyper-parameters.
If you could have a chat bot that can pass the Turing Test with a high degree of success, I think most people would consider that intelligent. Same thing for a robot that can do your laundry, wash your dishes and make some mac and cheese without days of training and without fucking up more than a typical human child would.
We won't consider machines intelligent until they have and demonstrate the capacity to consider us not-intelligent.
I can't see how this claim is even slightly illuminating aside from its pure "quip-ness". A robot playing, say, the game of go might spit out "you are not intelligent" whenever it wins. Is that "the capacity to consider us not-intelligent"?
You could even set up a micro-world where virtual entities interact, evolve, and rate their opponents. It wouldn't be hard for these entities to compete with humans in this arena, to do better than humans, and so to give humans poor evaluations, while the entities themselves remain pretty simplistic, far from what humans consider intelligent, and unfit outside the micro-world (you could do it with adaptive game-theory solvers, say).
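As a toy sketch of such a micro-world (everything here is made up for illustration): an adaptive agent best-responds to a biased "human-like" opponent at rock-paper-scissors, wins most rounds, and then "rates" its opponent. Nothing about it is intelligent in the way we usually mean.

    import random

    MOVES = ["rock", "paper", "scissors"]
    BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

    def adaptive_move(opponent_history):
        # best-respond to the opponent's most frequent move so far
        if not opponent_history:
            return random.choice(MOVES)
        predicted = max(MOVES, key=opponent_history.count)
        return next(m for m in MOVES if BEATS[m] == predicted)

    def biased_human(_history):
        # a "human-like" player who overplays rock
        return random.choices(MOVES, weights=[0.6, 0.2, 0.2])[0]

    history, wins = [], 0
    for _ in range(1000):
        a, h = adaptive_move(history), biased_human(history)
        history.append(h)
        wins += BEATS[a] == h

    rating = "not intelligent" if wins > 500 else "intelligent"
    print(f"adaptive agent won {wins}/1000; rates opponent: {rating}")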
Products are shipped which claim to use "AI". Under this definition that's self-contradictory, and no product claiming AI could ever exist. I'm fine with that, but then the term seems completely useless when "science fiction" already exists (and science fiction can be realistic, even if it often isn't).
That's not self-contradictory, it's just proof that products which claim to use "AI" can't possibly work. Quite consistent with what's observed in practice.
I understand what you are saying and I tend to agree. But: AI would then be way more specific to computers. Science Fiction is a very broad term also applied to biology, engineering etc.
Well, unless you read it as "humans do the hard part", in which case a product with AI can make substantial use of computer systems as long as humans are at the helm. The definition is obviously bad anyway: you wouldn't consider fire AI just because a computer can't make it. The definition needs an additional stipulation that AI is a computer program.
"Computers have not become intelligent per se, but they have provided capabilities that augment human intelligence, he writes. Moreover, they have excelled at low-level pattern-recognition capabilities that could be performed in principle by humans but at great cost. "
It's odd how under-appreciated this is. There is no "intelligence" here as we know it, not even a hint. There are algorithms operating on data for very prescribed use cases. When you break it down, it's pretty primitive stuff really. Clever, sure, but don't think we're building a mind here; that language just confuses things. So much of the public conversation around this stuff is polluted with this sort of sci-fi nonsense claiming all sorts of properties for AI and implying some sort of singularity is around the corner. Pure fantasy.
> There is no "intelligence" here as we know it, not even a hint. There are algorithms operating on data for very prescribed use cases.
I agree with your general point, but if you break it down, our brains are nothing "intelligent" either. It's just a network of neurons searching for patterns in the sensory input. Sure, it can reconfigure itself, but from a really abstract perspective it's nothing more than an ultra-scaled neural network as we currently use them. So this really boils down to the definition of intelligence, which is by no means easy.
Maybe, maybe not. I'm not sure if it's just a matter of scale or of the fundamental nature of the thing. It might be that the computer / neural network metaphor, whilst isomorphic to certain brain processes, may ultimately not tell us very much about the larger system and what it actually is.
I tend to think that because computers are the current peak of civilisation / engineering, and in some ways, "alive", we think we might be like a computer. In the Victorian machine age they talked of humans as machines. Maybe some, hitherto unthinkable invention is around the corner, and we will say humans / brains are like that too.
No, the difference is we can understand and apply knowledge in different contexts. I'm not saying machines will never mimic that (GPT-3 is getting kinda close at times), but as my good friend Dewayne Perry (co-author of the most cited paper in SW engineering and Motorola Regent Chair at UTexas) says, "Artificial intelligence has a long way to go just to catch up to natural stupidity!" Try getting Alexa (famously trained daily by millions) to do anything other than the predefined, prescribed way. I'd wait, but I've got a life to live...
It does not matter for my point. Even if our NNs miss some part of neuron function, that function would look comparable to a simple if or a multiplication at the smallest level. My point is that the complexity/intelligence comes from size; the fact that our algorithms adapt only minimally, or only handle pre-defined scenarios, and that we're able to fully understand them, does not mean they are fundamentally unintelligent.
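A minimal sketch of what that smallest level looks like, multiply, add, threshold (the AND wiring and numbers are just for illustration):

    # A single artificial neuron, reduced to its smallest parts:
    # multiply inputs by weights, sum, and pass through a threshold.
    # Real networks just repeat this at enormous scale.
    def neuron(inputs, weights, bias):
        total = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 if total > 0 else 0.0  # hard threshold ("if")

    # Example: a neuron wired up to behave like logical AND.
    and_weights, and_bias = [1.0, 1.0], -1.5
    for a in (0.0, 1.0):
        for b in (0.0, 1.0):
            print(a, b, "->", neuron([a, b], and_weights, and_bias))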
You can blame a lack of metaphysical sophistication for this and a lack of self-awareness around how "intelligence" is being projected or read into things.
One obvious feature of intelligence is that it can abstract from particulars. We form and reason about universal concepts. I know the concept of "squarishness" which means something definite without being some specific square (of which there is an infinite number). I can imagine a square, but those are always particular squares, whereas the concept is universal. But material things are only ever particular. Can you show me a material thing that is "squarishness"? No. Computers are physical objects, right? So how can the universal "squarishness" exist in a physical object without being a particular square?
Even here we often fool ourselves when we fail to recognize that computer programs, like the text in a book, need humans to grant them their identity and make sense of them, because their meaning and identity are assigned by the observer and are not intrinsic to them like mind-independent things. A "Square" class, defined as a record with two fields corresponding to width and length, is not the concept of a square. Instances of that class are not squares. Even "class" and "instance" are mental models which do not exist "out there" like real entities or phenomena.

What we call computers just have affordances that allow us to configure them to correspond and cooperate with the models in our minds. But strictly speaking, computers don't really have an objective reality, and neither does the computation within them. The machine is real in the sense that we have arranged a bunch of stuff to be a certain causal ensemble, but this causal ensemble is not objectively a computer, nor is it computing. Rather, we are using it to compute. If you erased all human beings from the planet and left a bunch of computers running, objectively they would only be arrays of things with magnetic and electrical components changing physical state.

You need someone to interpret what computers do to make them useful. Just like three beads on an abacus doesn't mean "3", and pushing a bead toward those three doesn't mean "adding 1". You need a person to assign those meanings. Computers don't magically break free of that reality.
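To make the "Square" example concrete, here's roughly the kind of record being described (a toy sketch; the class and field names are just illustrative):

    from dataclasses import dataclass

    @dataclass
    class Square:
        # Two numeric fields; nothing here "is" squarishness.
        width: float
        length: float

    s = Square(width=2.0, length=2.0)
    # To the machine this is just bytes in memory; calling it a "square"
    # (or noticing that width == length matters) is an interpretation we supply.
    print(s.width == s.length)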
I tried convincing the business developers and marketing for years that what we did was machine learning, applied statistics or, gulp, "data-driven" functionality - but no, the only thing that worked was "<Company> AI Engine"... hurray for throwing my PhD to the wind...
The alternative is to go full buzzword - we, for example, host our code on the blockchain [0] - and get technical in some lower layers of marketing or documentation, so that people who actually know what they're looking for can find out.
[0] git, that is. Yes, git is technically a blockchain.
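For what it's worth, the "blockchain" claim is about hash chaining: each git commit records the hash of its parent, so history forms a hash-linked chain. A toy sketch of that idea (not git's real object format, which also hashes trees, authors, and messages):

    import hashlib

    # Each "commit" includes the hash of its parent, so altering an old
    # commit changes every hash after it.
    def commit(parent_hash, message):
        payload = f"parent {parent_hash}\n\n{message}".encode()
        return hashlib.sha1(payload).hexdigest()

    c1 = commit("0" * 40, "initial commit")
    c2 = commit(c1, "add feature")
    c3 = commit(c2, "fix bug")
    print(c1, c2, c3, sep="\n")

In a real repository, "git cat-file -p HEAD" prints the commit object, including the parent line carrying that hash.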
What something is has almost no relationship to what sales could most successfully call it. That's a consequence of the lamentable but maybe unavoidable fact that customers are so rarely experts in what they buy.
Unfortunately marketing is everything. Engineers like to build nice shiny things without much of an eye on how to sell them. If you can slap "AI" on it and make some $, then why not? The end customer may not really care. It's not exactly like calling an apple "organic": there's no certification process or consensus on what AI actually is or means.
We, the people doing the actual work, have a moral responsibility to resist our work being dishonestly marketed. And certainly a moral responsibility not to engage in it ourselves.
AI isn't really a thing per se; it adds a kind of patina of false mystique to computation that is essentially no different from any other computation (which itself is also not really "a thing" out there in the world as a phenomenon). The only way "AI" really means anything is that the person in question has decided to frame something as AI. They are choosing to attach to it a meaning that it does not itself possess. Take the linear-regression-type stuff lots of AI today uses. Put it in another context and it ceases to be AI. Why? Because it's not AI per se! If it were a real thing or a real phenomenon, it would be mind-independent, and the mental context wouldn't determine its identity.
Basically, it seems that whenever someone mechanizes something that previously only a thinking person could do, it's viewed as AI.
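To underline the linear regression point: the computation is the same arithmetic whether a brochure calls it "AI" or "curve fitting". A minimal sketch with made-up numbers:

    import numpy as np

    # Ordinary least squares: the same math whether it's branded
    # "AI", "machine learning", or "fitting a line".
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 4.0, 6.2, 8.1, 9.9])

    slope, intercept = np.polyfit(x, y, deg=1)
    print(f"y ~ {slope:.2f} * x + {intercept:.2f}")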
When machine learning was the buzzword, companies started using it to describe almost everything they did. Recently I have also noticed that more companies (and PMs) are using AI in its place, at least in their marketing speak.
Often the AI or machine learning being sold to their customers (or, if it's a startup, to their investors) is in fact a team or a group of teams creating static rules. If it's image recognition, they will often have a large team of manual reviewers. Yes, there may be some AI or machine learning models assisting in the decision making, but they are usually much less effective than people realize.
Today whenever I hear AI or machine learning, my default is to assume it is marketing speak.
I don’t doubt there are models, I just doubt their effectiveness. But saying “we use AI to do X” sounds much better than saying “we have a team of experts who help us do X really well”.
With that said, I have worked and continue to work with some amazing data scientists who really are pushing the limits of what can be done with machine learning.
I interviewed with one of these startups with a .ai domain. I was super impressed that somehow they could decode human speech over a phone into orders. Until I asked them about that, and it turned out they had a call-center farm doing all of it. And a "prototype" of an AI that doesn't work at all lol
Sounds like they're building an annotated dataset, whilst at the same time claiming the space and testing market fit / appetite.
Sure, the risk is that they can't make it work in the medium term. But without knowing much about the complexity of the orders / domain, that doesn't sound like a huge risk, given the current SoTA
While I mostly agree with Michael Jordan, I think that he overstates his argument a small bit. I feel comfortable applying the "AI" label to projects if they: 1) achieve human level of performance or better. 2) They use any or all of standard techniques like deep learning, NLP, knowledge representation, reinforcement learning, etc.
I am just about to hit my 40-year anniversary of getting paid to do "AI"-related work, with a lenient definition of AI.
On the other hand, Artificial General Intelligence is a much stronger term, and is something that I don't expect to see in my lifetime.
Serious question, are calculators AI because they perform arithmetic much better than humans and use standard techniques like Taylor series expansion and numerical methods? I feel like you want to add that the technique has to use data as a core part of its algorithm that represents “learning”. But then we’re just back to machine learning that beats humans at a specific task. Is there a need for this term? We already have “superhuman algorithm”. AI has more syllables and we could also use SA which could double as State of the Art.
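For instance, a calculator-style sine routine is "superhuman" at arithmetic, yet nobody calls it AI. A rough sketch of the technique (truncated Taylor series with crude range reduction; term count is arbitrary):

    import math

    def taylor_sin(x, terms=10):
        x = math.fmod(x, 2 * math.pi)  # crude range reduction
        total, term = 0.0, x
        for n in range(terms):
            total += term
            # next odd-power term of the Taylor series for sin(x)
            term *= -x * x / ((2 * n + 2) * (2 * n + 3))
        return total

    print(taylor_sin(1.0), math.sin(1.0))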
> 2) They use any or all of standard techniques like deep learning, NLP, knowledge representation, reinforcement learning, etc.
But why that? There isn't a principled reason for singling those out. It's just a matter of convention that we call those AI.
Pretend you never heard of AI before and you came across these algorithms. No amount of analysis or poking around would lead you to think "oh, man, this is artificial intelligence!". No, you'd see a bunch of statistical techniques that do X, Y, and Z, none of which would jump out at you as "wow, this is AI". All that computers give us is the ability to run tedious computations over larger data sets that would be impractical for human beings to perform.
"AI" is what we read into these things. There really is no such thing as such.
What would a principled reason for association look like beyond mere convention? Language is used by different groups to mean different things. Machine learning, logic, control, robotics, linguistics, and cognitive science were publishing in artificial intelligence venues decades ago. Now AI seems to just mean DL/RL.
I appreciate this notion. It seems as though AI has become a fancy marketing term for stats. I also think there are times when knowing which types of models are used would be really helpful. Having built models with neural nets and GLMs on the same data sets, there are pros and cons to a variety of approaches, and oftentimes the simple ones win, even after a 6-month detailed analysis of all the predictors.
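The kind of head-to-head comparison I mean looks roughly like this (a sketch assuming scikit-learn is available; the dataset and hyperparameters are just stand-ins):

    # A GLM (logistic regression) next to a small neural net on the
    # same data; the simple model often holds its own.
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import cross_val_score

    X, y = load_breast_cancer(return_X_y=True)

    glm = LogisticRegression(max_iter=5000)
    net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)

    print("GLM accuracy:", cross_val_score(glm, X, y, cv=5).mean())
    print("NN  accuracy:", cross_val_score(net, X, y, cv=5).mean())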
AI these days can be seen as nothing more than a bunch of code used to offload human manual labor to machines. Strong AI[0] is the real sought-after gem we all want, but also the one that could cause problems (since it could 'run away' from its inventors and end up controlling humanity in some form).
I imagine if we really wanted to build our final invention[1] then we would have to be under some existential pressure to do so. In other words: we would build a 'run away' AI if it could potentially save humanity. Also worth reading this: https://en.wikipedia.org/wiki/Ethics_of_artificial_intellige...
Sadly our marketing tests show that when we say our product uses ML we get far less engagement than when we say AI. I don't know that net-conversion is better with AI but ML sure doesn't capture people's imagination. Sigh.
Btw, reminds me of the old joke that goes something like this:
AI for marketing, ML for recruiting, Regression for design, multiplication for implementation.
I think IEEE has had it right for several decades now - Computational Intelligence. I’ll never forget when my undergrad research adviser in Artificial Neural Networks and Self Organising Maps (circa 2003) corrected me - “we call it computational intelligence as artificial intelligence is science fiction”.
Having grown up with 8 and 16 bit games systems and the term AI being used to describe the computer opponents (eg in beat em ups) I’ve long since learned not to take the term AI literally.
The problem is the term never had a technical definition. It was always just a hand waving marketing phrase for “clever algorithms”.
There's been a rule of thumb for years now: if it's Python, we are seeing ML; if it's PowerPoint, ladies and gentlemen, let me present to you an AI.
The last time I was at a start up (~2 years ago), it was a running joke that the easiest way to raise money was to say your product had something to do with AI. We saw so many companies that were branding everything as AI simply to raise a round.
One of the funniest cases was a start up that claimed to be using AI to process their customers' requests but was actually just farming out the work to contractors who were doing everything by hand.
> “People are getting confused about the meaning of AI in discussions of technology trends—that there is some kind of intelligent thought in computers that is responsible for the progress and which is competing with humans,” he says. “We don’t have that, but people are talking as if we do.”
How much more am I, as a human, really, than the machine learning algorithms are?
A lot? A machine learning algorithm is a task-specific pattern matching algorithm. Sea cucumbers are probably more "intelligent" than any ML algorithm.
Most are pre-trained and intentionally left incapable of learning on the go. One would probably count that as a rather major issue if encountered in a human.
Fundamentally different, I would say. I understand enough about the brain to say that talking about any kind of machine intelligence in relation to brains is comparing apples and oranges. It's not computing things in any way analogous to how a computer works. Why would it be?
Of course, computers might be able to imitate some things and far exceed some things that our brains do. That's different though.
I get that your comment is tongue-in-cheek, but your brain doesn't have a clock signal. That's easily verifiable, and we could easily detect one if it did. Thus, computers are fundamentally different from brains. Also, computers have nearly perfect recall and perform arithmetic much better than humans. So both of GP's statements are true.
Hey, would somebody mind breaking down the different not-necessarily-AI things? Like is machine learning the same as neural nets or a subset? This would be tremendously useful! I would assume "expert systems" falls somewhere at the bottom of the list, right above "good old vanilla programming". Dunno how linear regression fits in.
Some of the boundaries here are blurry and subjective, making it nontrivial to produce neat categories.
Personally, I go to the Wikipedia page for any particular term and follow the crowd wisdom, plus a grain of salt.
¯\_(ツ)_/¯
Actually, you raise a genuine issue. Terminology isn't clear, and that makes discussion difficult.
I believe that the notion of AI was predictive, and emergent, and retro-fitted. It emerged around the 1950s (arguably), gained some degree of formalisation, meanwhile the goalposts shifted and the field diversified, and now the term is applied haphazardly. I think it's now the responsibility of the user of the phrase to define their terms... which probably indicates that it's no longer fit for purpose.
I think it’s producing plenty, but it will never live up to the hype of singularity-enabling Asimov-style superior super-intelligences that are being promised regularly to be just around the corner.
I think there's some "missing link" here. I think we'll have human brains merging with computers to create a symbiotic AI relationship that's less artificial and more "enhanced" for already-existing "intelligence". That enhancement is probably the equivalent of the singularity, or a precursor to it: as we are able to research faster and faster with our computer-aided brains, we might get to AGI more easily and quickly.
We're at least 30 years away from that, though, if not 100. Immortality tech may be easier to accomplish.
If a rule-based system can get results as good as a deep neural net's, why is the deep neural net "AI" while the rule-based system is "dumb and hard-coded"?
AI is not a precise term. If you can make a product that feels intelligent to the user, why does the implementation matter?
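To illustrate: here are two spam filters that can look identical to the user, one hand-coded rule and one "trained" from examples (a toy sketch; the keyword list and training data are made up):

    SPAM_WORDS = {"winner", "free", "prize", "urgent"}

    def rule_based_is_spam(text):
        # hand-written rule: any spam keyword triggers the flag
        return any(word in text.lower() for word in SPAM_WORDS)

    def train_keyword_weights(examples):
        # "learn" a per-word weight from labelled (text, is_spam) pairs
        weights = {}
        for word in SPAM_WORDS:
            spam_hits = sum(1 for t, label in examples if label and word in t.lower())
            ham_hits = sum(1 for t, label in examples if not label and word in t.lower())
            weights[word] = spam_hits - ham_hits
        return weights

    def learned_is_spam(text, weights, threshold=1):
        score = sum(w for word, w in weights.items() if word in text.lower())
        return score >= threshold

    examples = [("Claim your free prize now", True), ("Free lunch at the office", False)]
    weights = train_keyword_weights(examples)
    msg = "claim your free prize"
    print(rule_based_is_spam(msg), learned_is_spam(msg, weights))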
I agree with the article. I think calling machine learning as it exists today "AI" is just plain wrong. It started as pure marketing about 5 years ago, and now it's almost like people are believing it, even in the industry.
But the florist across the street told me several weeks ago that they are just now getting into "deep learning AI" (their words) to pick the right flowers for my mother for Mother's Day. Since they are "going to use" deep learning soon, they must be turning into an AI company. /s
Every new company I see preaching endlessly about 'AI' to address a non-issue is, at this point, straight-up begging VCs to fund their $0-revenue so-called 'tech startup'.
Disable Javascript. I use NoScript on Firefox. For IEEE, I had to (temporarily) whitelist the main site to get the article sans all the crap. I find this is pretty common when browsing with Javascript mostly disabled.
Depends. If the pop up is something along the lines of "we only have functional cookies", that's okay anyway. If they try to force advertising cookies I'll look for an alternative site (e.g. for coding help) or skip the article. Same if the popup is designed in a way that makes skipping more work than the content is worth.
Of course, sometimes you direly need some info and can't skip it, but these situations are extremely rare.
IEEE collects the following personal data in line with the use purposes explained in a subsequent section:
- Your name and contact details
- Date of birth
- Online profile data/usage
- Emergency contact information
- Social media profile information
- Copies of identification documents
- Education and professional information
- Communication information including IEEE Online Support and Contact Center communications
- Purchasing and payment information
- Registration and participation in IEEE events and activities
- Subscription preferences
- Information about the device(s) you use
- Information about service usage
- Cookies
- Authentication data
- Location information
- Author and peer review information
- Other information you upload or provide to us
How do we use your information?
To engage with third parties. IEEE may share your personal data with third parties in connection with services that these individuals or entities perform for or with IEEE.
In this case no, as I did not need to click "accept" in order to read articles. With an intrusive pop up, I most likely would have. Overall, though, I'd probably need to make an exception for IEEE anyway as it hosts a lot of papers for my field of study.
To be fair, though, I'm not a hardcore data protection activist. It might be possible to start a complaint or check whether the website does tracking without clicking "I accept" (and I'm pretty sure most do), but unfortunately my life is only so long and the effect is probably rather small. I highly dislike that this tactic works for them, but I'm only willing to sacrifice so much. Also, I still run NoScript and ad blockers, so the effective tracking is hopefully limited anyway.
I typically use "reader mode" in whatever browser I'm on. It typically shows the article nicely even when either a cookie consent modal, or an ad would otherwise be blocking the content.
Whew boy, this guy reminds me of Frank Grimes: mad that he can't control the world. I don't really think it matters; "computers" originally meant the women who computed ballistics tables for the military in the early '40s... language changes, and one person can't change that.
What is considered "AI" changes over time. At any point in history, if a machine appears to do a task that only a human brain used to be able to do, then that's considered "AI".
So in the mid 1900s a calculator that is able to do arithmetic was considered "AI", and in the late 1900s chess playing machines were considered "AI". Today those things are not considered "AI".
What is considered "AI" today will not be considered "AI" a decade or two from now.
>So in the mid 1900s a calculator that is able to do arithmetic was considered "AI"
Is this true? The citation you link doesn't seem to clearly state that.
I see the linked page says:
>A simple electronic calculator performs calculations much faster than the human brain, and almost never makes a mistake.[4] Is a calculator intelligent?
But I don't see where it says anyone in the mid 1900s thought a calculator was AI.
I am no historian, but I can think of numerous examples of AI in science fiction throughout the 1900s and perhaps earlier. Metropolis, Rossum's Universal Robots, 2001: A Space Odyssey. Heck, one could probably even draw the line all the way back to Frankenstein.
Maybe the lower bound for what gets called AI to sell stuff has changed over time, but it seems like there's been a pretty clear and consistent (if poorly defined) goal of human-like intelligence.
Artificial intelligence has been around for ages, with the idea appearing even in Greek mythology to describe human-like machines that copied man's behaviour. At the beginning, even early computers were considered logical machines, as they were able to reproduce "intelligent" capabilities such as arithmetic and memory. Engineers of that era saw it as an attempt to create mechanical brains.
Nowadays, something as "trivial" as a calculator function does not go hand in hand with our current concept of Artificial Intelligence.