One of the older philosophical problems of AI is that the field has been defined by problems that stop being "AI problems" as soon as a solution is discovered. So if those problems weren't AI problems after all, then what exactly is AI?
One solution to this conundrum is to say that AI research is just that: a research process. There are no inherently AI problems; AI research is just a process by which we uncover solutions to previously intractable problems.
Urusan
in reply to Urusan
I have an alternative idea: there are inherently AI problems.
Informally, I would define AI problems as problems where testing correctness has open-ended complexity.
In practice, this means that the number of conditions that you need to test to ensure your system is 100% correct is too high to be practical.
Note that this space of problems is different from computationally hard problems. Most computationally hard problems have simple, brute-force solutions.
Urusan
in reply to Urusan
It's also different from simple things like "testing that integer addition works".
To test addition, you could just test every possible pair of integers representable by your computer. For 64 bits, this would require 2^64 × 2^64 = 2^128 tests, which is far too many to be practical.
However, given a specific bit size, we can instead test a normal case and the boundary cases, which means the problem is much simpler than it might first appear.
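As a rough sketch of that boundary-case strategy (my own illustration in Python; `add64` is a hypothetical implementation under test, stubbed here with the reference wrap-around behavior):

```python
# Sketch: rather than all 2^128 input pairs, test a normal value plus
# the classic boundaries of wrapping 64-bit unsigned addition.

MASK = (1 << 64) - 1  # largest 64-bit unsigned value

def add64(a: int, b: int) -> int:
    # Hypothetical implementation under test; stubbed with the
    # reference behavior (addition modulo 2^64).
    return (a + b) & MASK

# A normal case plus boundary cases: zero, one, the extremes, and
# values near the wrap-around point.
values = [0, 1, 2, 12345, 2**32, 2**63 - 1, 2**63, MASK - 1, MASK]

for a in values:
    for b in values:
        assert add64(a, b) == (a + b) & MASK, (a, b)
print(f"{len(values)**2} cases pass")
```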
Urusan
in reply to Urusan
However, it's possible to come up with a binary function with two 64-bit inputs and one 64-bit output that has no regular pattern, and thus can't be tested except by brute-force checking of every possible combination.
This isn't necessarily an AI problem either, since there's no pattern available to exploit. However, there's a whole spectrum of problems from the extreme no-pattern case all the way down to the regular case. AI problems live in between these two extremes.
Urusan
in reply to Urusan
If you're attacking one of these problems, you may find large-scale patterns that easily provide a large boost to accuracy.
However, on individual cases you will get things wrong, and since these are arbitrary functions you might get them horribly wrong if there are substantial outliers in the function.
You can always add more refinement to your solution (and testing) with these AI problems; they're so complex that you'll never pin them down 100% in practice.
Urusan
in reply to Urusan
Another important note is that most real world AI problems are ever-changing as well.
As an example, natural language translation between two languages doesn't have a "final form" that you can converge on and call it done. Language is always changing over time so you have to keep it up to date as new words, expressions, and ideas emerge.
You could theoretically generate the perfect translation algorithm at some specific moment, but over time it would get worse and worse.
Urusan
in reply to Urusan
Of course, if you try to take this idea and formalize it, I'm pretty sure it'd be tricky to pin down as well.
Some problems near the bottom of the complexity spectrum might yield a solution if we put a lot of resources into them, and so stop being AI problems one day. This is theoretically possible for any problem, but for most it's completely impractical.
On the flip side of the coin, when does a problem stop being an AI problem and start being a function of random noise?
Urusan
in reply to Urusan
Another important thing here is that, while defining which problems are AI problems is hard...most problems (on real, finite hardware) are AI problems.
We've been focused on the simple problems that we can solve with limited tools, so we're biased towards thinking that most problems are simple. However, most possible functions (in the realistic domain) are not simple regular functions or totally pattern-less noise.
Even if you're given noise, there are likely latent patterns.
Urusan
in reply to Urusan
This perspective is why I'm skeptical of ideas that we'll be able to "understand what AI systems are doing" and through this understanding be able to render them safe and/or predictable.
Sure, for some problems this isn't an issue; we can already systematize and practically solve these problems.
These aren't the interesting AI problems, though; they're just applications of AI to traditional problems, where the AI solves them with raw compute rather than human thought.
Urusan
in reply to Urusan
For these real AI problems though, there is no general understanding to be had.
Sure, the AI (or human) can come up with a post-hoc rationalization for certain decisions. We can also broadly understand the larger patterns that the system came up with, but understanding the full details is inherently intractable.
Urusan
in reply to Urusan
This isn't to say that understanding these systems isn't good/important work. I'm just saying that it's no magic bullet.
There's no complete solution here, so we should set our expectations appropriately.
dorotaC
in reply to Urusan
I think this definition is not very useful. The problem of finding prime numbers falls squarely into this definition. Is it an AI problem? I don't think so: the common usage of the term doesn't support it, and there's nothing to be gained by calling it one.
Something's missing before I can start using your definition.
Urusan
in reply to dorotaC
@dcz There's a simple algorithm to compute primality (as well as for enumerating the primes).
Your testing suite can be conceptually very simple, since you have a simple mathematical definition. Just compare the output of the algorithm under test against the answer provided by trying all valid divisions.
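A minimal sketch of that test in Python (my illustration; `fast_is_prime` stands in for whatever algorithm is under test, here a simple 6k±1 trial-division variant):

```python
# Sketch: testing a primality algorithm against its mathematical
# definition. The oracle is trial division -- the definition itself.

def brute_force_is_prime(n: int) -> bool:
    # The definition: n > 1 with no divisor in [2, n-1].
    return n > 1 and all(n % d for d in range(2, n))

def fast_is_prime(n: int) -> bool:
    # Stand-in for the algorithm under test (6k±1 trial division).
    if n < 2:
        return False
    if n < 4:
        return True
    if n % 2 == 0 or n % 3 == 0:
        return False
    d = 5
    while d * d <= n:
        if n % d == 0 or n % (d + 2) == 0:
            return False
        d += 6
    return True

# The whole test suite is conceptually one line: compare to the oracle.
for n in range(2000):
    assert fast_is_prime(n) == brute_force_is_prime(n), n
print("agrees with the definition up to 2000")
```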
My point with contrasting these AI problems to computationally hard problems is that the computational intractability doesn't matter, just the conceptual intractability.
dorotaC
in reply to Urusan
Enumerating primes is conceptually intractable as well. There's the useless brute-force method, and there are tricks like Fermat numbers (2^(2^n) + 1).
If AI is about conceptual intractability, then the entire field of maths becomes an AI problem. Is that a helpful categorization? What insight does it bring?
Urusan
in reply to dorotaC
@dcz Almost all math problems that we think about are conceptually very simple in comparison to what's possible. You don't usually find mathematicians tackling problems involving complicated composite functions that take multiple pages just to fully define. There's enough interesting stuff going on in the simple mathematical spaces to explore for a lifetime.
My point is that this whole poorly explored universe of far more complicated problems exists.
Urusan
in reply to Urusan
@dcz The main insight is that these problems won't be like the simple ones...the correct solution won't be simple.
We already know of many simple problems which are unsolvable, either theoretically or in practice.
However, aside from problems we've definitely proven to be unsolvable, these simple problems are at least in principle solvable. They're simple enough to have simple, understandable answers.
If the answer is a trillion page book, you aren't ever going to understand it.
dorotaC
in reply to Urusan

Urusan
in reply to dorotaC
@dcz I mean, math does sometimes tackle these AI problems. It's all in the mathematical domain.
AI is also making its way into math more and more; there's increasing overlap.
The reason I'm hesitant to say "most" math problems are AI problems is that once you leave real-life computational problems and enter the mathematical arena fully, infinity gets into the picture, and that changes everything.
dorotaC
in reply to Urusan

Urusan
in reply to dorotaC
@dcz Well, for one, ideas about conceptual (or computational) intractability stop making sense. You always have infinite resources in infinite math-space.
Undecidable problems have a different, more paradoxical flavor. Infinity just generally makes things act in weird and unintuitive ways.
Most math problems are infinite, and meta-characterization of them is extremely difficult to get correct.
Urusan
in reply to Urusan

dorotaC
in reply to Urusan
I don't believe I've ever witnessed infinite resources being necessary to solve a practical mathematical problem. The best that can happen is meta-reasoning: thinking about solving problems, not solving them.
Thinking about AI solving problems with infinities does change things, but doesn't have any effect on the AI that we'll see in real life.
Urusan
in reply to dorotaC
@dcz I agree, which is why I'm not pursuing this idea about the intersection between pure mathematics and (A)I any further.
Not to mention that I don't have the theoretical background to do it justice.
dorotaC
in reply to Urusan
> You don't usually find mathematicians tackling problems involving complicated composite functions that take multiple pages just to fully define.
Sure you do. But they are abstracted away. Just take a look at this:
2.5(6)
To get this expression, you have to define naturals (en.wikipedia.org/wiki/Church_e…), define integers, and then rationals, and then reals. Writing it out in terms of basics would take a page. And it's just basic maths. Not simple. Sounds like AI to me.
Urusan
in reply to dorotaC
@dcz Abstraction is a tool to reveal the underlying simplicity of the problem, not to make inherently complex problems simple.
Sure, one could always deploy an abstraction that encapsulates the whole problem, but to truly understand the problem, one must understand all the dependent abstractions.
A good abstraction reduces overall complexity, even if it is itself complex.
Urusan
in reply to Urusan
@dcz It's true that the problems we often investigate in mathematics aren't really nearly as simple as they seem, thanks to the depth of the requisite abstractions. But even if you require rigorous definitions and proofs, we're still only talking about several thousand pages (such as in the Principia Mathematica) plus whatever the definition of the problem is.
Most math problems are like a one-page addition to the prerequisites.
These AI problems are much larger.
dorotaC
in reply to Urusan
I don't know where your boundary lies... There are plenty of problems where humans can't verify answers themselves because they take thousands and thousands of pages, so they employ computers to do that. Does the proof of the four color theorem fit the AI definition?
en.wikipedia.org/wiki/Four_col…
I don't know, but I think this definition of AI is not that great, and definitely not the popular one now.
Urusan
in reply to dorotaC
@dcz I'm not really talking about proofs though. I'm talking about problems.
I guess you can frame a specific proof as a problem...
In which case, yes, it took an intelligent system to solve the proof of the four color theorem. It has a massive answer, and even then it requires additional computational work rather than directly understanding the solution.
That said, I'd consider it a borderline case. The resulting proof is shorter than the prerequisites and is quite realizable.
Urusan
in reply to Urusan
@dcz Also, the border here is relative. It's in one place for humans, in another for mice (who can't manage the prerequisites for mathematics), and in still another place for a superintelligence.
Some problems, though, are beyond the full understanding of any intelligence in our universe, yet could still be practically tackled by pretty modest intelligent systems.
dorotaC
in reply to Urusan

Urusan
in reply to Urusan
@dcz Contrast this with a likely AI problem: translate English to Japanese with a limit of 500 bytes of UTF-8 text.
Imagine trying to define a complete test suite to confirm your algorithm is working correctly.
Much like with the simple problems, running the full test would take too many compute resources to be tractable, but here you would also need an overwhelming amount of resources just to define the test.
That's what makes it an AI problem.
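To make the contrast concrete, here's a sketch of the shape such a test suite would take (all names and example pairs are hypothetical): the structure is trivially simple, and the intractable part is filling in the table.

```python
# Sketch: a translation test suite is conceptually just a table mapping
# each source string to its set of acceptable translations. A complete
# `suite` would need an entry for every valid English string up to
# 500 bytes -- vastly more text than humanity has ever produced.

suite: dict[str, set[str]] = {
    "Good morning.": {"おはようございます。", "おはよう。"},
    "It can't be helped.": {"仕方がない。", "しょうがない。"},
    # ...an astronomically long tail of entries would go here...
}

def score(translate, tests: dict[str, set[str]]) -> float:
    # Fraction of cases where the output is an accepted translation.
    hits = sum(translate(src) in ok for src, ok in tests.items())
    return hits / len(tests)
```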
Urusan
in reply to Urusan
@dcz You could definitely enumerate every possible valid string and include its translations, which would be off-the-charts enormous.
A far more compact test suite is possible; you don't actually need to enumerate every possible string.
However, even in the best case, the amount of material you would need to define this test suite would be far more than everything ever written and read throughout all of human history, and likely larger than what's available in the universe.
dorotaC
in reply to Urusan
I think you're hitting some sort of a paradox about underdefined problems.
If you have a problem, aren't you defining it in terms of what properties its solution has? And if you don't know which properties you're looking for, can you even explain the problem to the AI so that it can solve it?
When you tell the AI "please translate to Japanese", you are not giving it a well-defined task, so it's no wonder that you can't verify an answer.
Urusan
in reply to dorotaC
@dcz This sort of touches on underdefinition issues; they're common in these AI problems in practice, but they aren't an inherent part of them either.
I can make up an arbitrary, finite function (e.g. taking two 64-bit integers and outputting one 64-bit integer) and provide an output for every possible set of inputs in a gigantic 2^64 × 2^64 table. This is a well-defined function.
However, if the patterns in the outputs are sufficiently complicated, then the explanation may be huge.
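A sketch of such a function (my illustration): a cryptographic hash stands in for the gigantic table, since the 2^128-row table itself can't be materialized. The function is perfectly well defined, yet its outputs have no regularity a smaller test suite could exploit.

```python
import hashlib

def arbitrary_f(a: int, b: int) -> int:
    # Well defined for every pair of 64-bit inputs, but (by design of
    # the hash) with no exploitable pattern: verifying an implementation
    # would mean checking all 2^128 entries of the implicit lookup table.
    digest = hashlib.sha256(a.to_bytes(8, "big") + b.to_bytes(8, "big")).digest()
    return int.from_bytes(digest[:8], "big")

print(arbitrary_f(1, 2))  # deterministic, yet pattern-free
```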
Urusan
in reply to Urusan
@dcz A real-world problem like language translation can be even harder due to ambiguity and underdefinition. We might not even be able to theoretically derive this kind of "oracle function" if some string has multiple valid outputs, or if some string doesn't have a definite output.
This detail doesn't change the basic premise though: some problems have answers that are too complicated to fully understand, so we have to rely on AI (or plain old I) to provide practical solutions.
dorotaC
in reply to Urusan

Urusan
in reply to dorotaC

dorotaC
in reply to Urusan
I don't think we can ask this question precisely. We'd have to include in the question the laws of the universe, and we don't know all of them.
We can approximate the question by supplying a simpler model, like we already do to solve protein folding. But then is there a reason we wouldn't be able to verify the answers according to the model?
Urusan
in reply to dorotaC
@dcz For the model-driven form of the problem, sure, you can use the model to test itself. That's not particularly valuable though.
The difference between this case and the prime number example is that the prime numbers are literally defined in mathematical/computational terms.
In this case, we have a real world problem we would like to solve, so matching up closely with reality is what we're after.
Even if we don't know the answer to write a test, that doesn't matter.
Urusan
in reply to Urusan
@dcz The point is that there's an answer out there.
I'm not sure about the details of how underdefined or otherwise messy it would be. I know some amino acid chains have multiple foldings, but it could likely still be well defined by outputting all the possible foldings (perhaps with a probability or some explanation of how alternative foldings occur).
Regardless though, the answer is out there and it's finite. Even a close approximation would be incredibly valuable.
dorotaC
in reply to Urusan
We not only don't know the answer, we also don't know the question. We don't know what rules govern the real world, and there is no reason to believe that AI will. So we can't ask questions about the real world if we want precise answers. Even the strictest standards of physical experiments are probabilistic.
That means AI under the formulation "we can't verify its answer to a well-formed question" can't be recognized by giving it a physics problem.
Urusan
in reply to dorotaC
@dcz I never said AI problems have to have well-defined questions or answers; I just said that AI problems that have both well-defined questions and answers do exist.
Most of them aren't very interesting. The really interesting problems are the poorly defined ones in the real world.
However, the well-defined ones give us a way of thinking about the poorly defined ones without getting caught up in the poor definition.
dorotaC
in reply to Urusan
I focus on ill-defined questions because it's very easy to provide an unverifiable answer to those, but it won't be useful to call that AI :P
And I think practical AI is (so far) precisely useful for ill-defined questions, but then I cannot rely on any expectations about the correctness of the answer.
Urusan
in reply to dorotaC
@dcz That's the point though...we use AI to solve problems we can't solve through normal means. It's an inherent part of the problem-space for us to not fully understand it, regardless of the reason why (well defined but too complex, or poorly defined, or both).
Recent work on AI is producing accurate (and inaccurate) answers we don't understand, and that's not going to go away because these inherent AI problems are a perfect fit for AI techniques.
Urusan
in reply to dorotaC
@dcz By the way, thinking about your train of thought here some more, I'm realizing that the formal definition of the question will necessarily be large if the method to check the answer (the test suite) is large.
I think one can always just frame the way to check the answer as the formal definition of the question.
Still, this doesn't change the fact that plenty of interesting real life problems have unknown formal definitions and hugely complex answers.
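A small sketch of that framing (my own, with hypothetical names): a problem is identified with its answer-checker, so the size of the checker is the size of the question's formal definition.

```python
from typing import Callable

# A problem, formally, can be identified with its answer-checker:
# "find x satisfying P" is defined by the predicate P itself.
Problem = Callable[[int], bool]

# Small checker, small question: "find an integer square root of 2209".
is_sqrt_of_2209: Problem = lambda x: x * x == 2209

assert is_sqrt_of_2209(47)
# If the smallest possible checker were a 2^128-entry table, the
# question would have no formal statement shorter than that table.
```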
dorotaC
in reply to Urusan
Can you convince me that there are any problems in between the extremes which involve two 64-bit inputs and one 64-bit output?
And if there are, do we gain anything interesting by calling them AI?