Rocksolid Light

Welcome to RetroBBS




Does the halting problem actually limit what computers can do?

<uhm4r5$7n5$1@dont-email.me>

https://www.rocksolidbbs.com/computers/article-flat.php?id=11959&group=comp.ai.philosophy#11959
Path: i2pn2.org!i2pn.org!eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: polcott2@gmail.com (olcott)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Subject: Does the halting problem actually limit what computers can do?
Date: Sun, 29 Oct 2023 12:30:11 -0500
Organization: A noiseless patient Spider
Lines: 36
Message-ID: <uhm4r5$7n5$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sun, 29 Oct 2023 17:30:13 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="1b074cc8a65e3858c9d9d079519a8a78";
logging-data="7909"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX18Bba4H0DJ4DGs15G347fPz"
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:QkEfdQJCOeiCvBNNzs2QjBOSyUY=
Content-Language: en-US
 by: olcott - Sun, 29 Oct 2023 17:30 UTC

*Everyone agrees that this is impossible*
No computer program H can correctly predict what another computer
program D will do when D has been programmed to do the opposite of
whatever H says.

H(D) is functional notation that specifies the return value from H(D)
Correct(H(D)==false) means that H(D) is correct that D does not halt
Correct(H(D)==true) means that H(D) is correct that D does halt

For all H ∈ TM there exists input D such that
(Correct(H(D)==false) ∨ Correct(H(D)==true)) == false
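The construction described above can be sketched in Python (a minimal sketch; the body of H below is an arbitrary stand-in, since the argument works the same for any fixed H):

```python
# Minimal sketch of the pathological construction described above.
# H is a hypothetical candidate halt decider: H(program, data) is
# meant to return True if program(data) halts, False otherwise.

def H(program, data):
    # Arbitrary stand-in strategy: always guess "halts".
    # The argument goes through no matter what fixed strategy is here.
    return True

def D(program):
    # D does the opposite of whatever H predicts about D(D):
    if H(program, program):
        while True:   # H said "halts", so D loops forever
            pass
    else:
        return        # H said "loops", so D halts immediately

# For this stand-in H: H(D, D) is True, yet D(D) would run forever,
# so this particular H answers wrongly on this particular D.
assert H(D, D) is True
```

Swapping in any other fixed strategy for H only changes which branch D takes; the mismatch itself survives every choice.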

*No one pays attention to what this impossibility means*
The halting problem is defined as an unsatisfiable specification, and is
thus isomorphic to a question that has been defined to have no correct
answer.

What time is it (yes or no)?
has no correct answer because there is something wrong with the
question. In this case we know to blame the question and not the one
answering it.

When we understand that there are some inputs to every TM H that
contradict both Boolean return values that H could return then the
question: Does your input halt? is essentially a self-contradictory
(thus incorrect) question in these cases.

The inability to correctly answer an incorrect question places no actual
limit on anyone or anything.

This insight opens up an alternative treatment of these pathological
inputs the same way that ZFC handled Russell's Paradox.

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: Does the halting problem actually limit what computers can do?

<uhm7b1$36q2u$1@i2pn2.org>

https://www.rocksolidbbs.com/computers/article-flat.php?id=11960&group=comp.ai.philosophy#11960
Path: i2pn2.org!.POSTED!not-for-mail
From: richard@damon-family.org (Richard Damon)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Subject: Re: Does the halting problem actually limit what computers can do?
Date: Sun, 29 Oct 2023 11:12:49 -0700
Organization: i2pn2 (i2pn.org)
Message-ID: <uhm7b1$36q2u$1@i2pn2.org>
References: <uhm4r5$7n5$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sun, 29 Oct 2023 18:12:49 -0000 (UTC)
Injection-Info: i2pn2.org;
logging-data="3369054"; mail-complaints-to="usenet@i2pn2.org";
posting-account="diqKR1lalukngNWEqoq9/uFtbkm5U+w3w6FQ0yesrXg";
User-Agent: Mozilla Thunderbird
Content-Language: en-US
In-Reply-To: <uhm4r5$7n5$1@dont-email.me>
 by: Richard Damon - Sun, 29 Oct 2023 18:12 UTC

On 10/29/23 10:30 AM, olcott wrote:
> *Everyone agrees that this is impossible*
> No computer program H can correctly predict what another computer
> program D will do when D has been programmed to do the opposite of
> whatever H says.
>

Good that you admit that.

> H(D) is functional notation that specifies the return value from H(D)
> Correct(H(D)==false) means that H(D) is correct that D does not halt
> Correct(H(D)==true) means that H(D) is correct that D does halt

Except that it should be H(D,D), since you need to give H the input that
D needs to be given.

So, your "Correct" function is false since H(D,D) will, as you just
agreed, never return the right answer for the D designed for it.

Note also that the function Correct must return the value false if the H
it is given doesn't return a value in a finite number of steps, as that
makes H not actually a decider, so it is not a "correct decider".

>
> For all H ∈ TM there exists input D such that
> (Correct(H(D)==false) ∨ (Correct(H(D)==true))==false

Nope, try to give the case. You are just LYING here and showing your
ignorance.

Remember, each H above is a SPECIFIC Turing machine (and for each H
there will be a SPECIFIC D, based on that SPECIFIC H, for which that
SPECIFIC H will get the answer wrong).

Remember, for EVERY actual SPECIFIC Turing Machine D (with input x) D(x)
will either Halt or Not.

For every actual SPECIFIC Turing Machine H, it will either give the
correct answer, so Correct will answer True, or H will either not answer
or give an incorrect answer, so Correct will answer False.

There is no case for a SPECIFIC H, and a SPECIFIC D that Correct(H(D))
doesn't have a True or False answer. Try to show the case.

Remember H is a SPECIFIC TM, (since H ∈ TM) not a "set" of Turing
Machines. Your "Correct" predicate doesn't take a "set" of Turing
Machines, but an individual Turing Machine, and the "Pathological" D
isn't built on a "Set" of Turing Machine, but an individual one.

The actual question is about a specific input, and that ALWAYS has a
correct answer; it's just that some machines won't get it right. And we
can show that for EVERY decider we can make, there WILL be some specific
input (depending on the specific decider we are looking at) that the
decider WILL get wrong.

Thus, non-computable valid problems exist, as shown by theory.
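The point about SPECIFIC machines can be made concrete with a sketch (H1, H2, and make_D are hypothetical names introduced only for illustration): each fixed decider has a specific input it gets wrong, while the question about that input still has a definite answer that some other machine gives correctly.

```python
# Sketch: each SPECIFIC H has a SPECIFIC D it gets wrong, but the
# question "does D(D) halt?" still has a definite answer that some
# OTHER machine can give correctly.

def make_D(H):
    # Build the pathological input for a given decider H.
    def D(p):
        if H(p, p):
            while True:  # loop forever if H predicts halting
                pass
        return None      # halt if H predicts looping
    return D

def H1(p, x): return True    # decider that always says "halts"
def H2(p, x): return False   # decider that always says "loops"

D1 = make_D(H1)  # D1(D1) loops forever, so H1(D1, D1)=True is wrong
D2 = make_D(H2)  # D2(D2) halts,         so H2(D2, D2)=False is wrong

# But the answers exist: H2 answers "does D1(D1) halt?" correctly
# (False), and H1 answers "does D2(D2) halt?" correctly (True).
assert H2(D1, D1) == False
assert H1(D2, D2) == True
```

Each question here has a correct answer; what fails is only that the one decider each D was built against cannot be the machine that gives it.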

>
> *No one pays attention to what this impossibility means*
> The halting problem is defined as an unsatisfiable specification thus
> isomorphic to a question that has been defined to have no correct
> answer.

Nope, again your ignorance of the problem.

>
> What time is it (yes or no)?
> has no correct answer because there is something wrong with the
> question. In this case we know to blame the question and not the one
> answering it.

Right, THAT question has no correct answer.

"Does D halt?" HAS a correct answer; H just doesn't give it.

DIFFERENCE.

Shows you don't understand the problem.

>
> When we understand that there are some inputs to every TM H that
> contradict both Boolean return values that H could return then the
> question: Does your input halt? is essentially a self-contradictory
> (thus incorrect) question in these cases.

But there IS a "Correct Answer", so the QUESTION isn't actually
self-contradictory.

You are showing your stupidity.

>
> The inability to correctly answer an incorrect question places no actual
> limit on anyone or anything.

Sure does, but you are too stupid to understand.

>
> This insight opens up an alternative treatment of these pathological
> inputs the same way that ZFC handled Russell's Paradox.
>

Nope. ZFC handled Russell's Paradox by deciding that we can't actually
logically talk about a truly "Universal" set of all possible sets.

At best, your equivalence is just the admission that there IS a
limitation to computability: there exists a class of properties of
Turing Machines that does exist and is valid (as the property is defined
for all Turing Machines) but cannot be computed by another Turing
machine, given a proper description of the machine to be decided on.

That is EXACTLY the statement you have been trying to DISPROVE for all
these years, but now seem to be accepting, while still saying it doesn't
affect anything.

You are ADMITTING some things are not computable, and then saying this
fact doesn't limit what a computation can do.

That is like saying I know I can't get this car over 80 MPH, but there
is no limit to how fast this car can go.

Just a pitiful LIE.

Re: Does the halting problem actually limit what computers can do?

<85c5e6d0-5b1a-affc-fe74-8754535622b0@att.net>

https://www.rocksolidbbs.com/computers/article-flat.php?id=11961&group=comp.ai.philosophy#11961
Path: i2pn2.org!i2pn.org!eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: james.g.burns@att.net (Jim Burns)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Subject: Re: Does the halting problem actually limit what computers can do?
Date: Sun, 29 Oct 2023 14:26:27 -0400
Organization: A noiseless patient Spider
Lines: 38
Message-ID: <85c5e6d0-5b1a-affc-fe74-8754535622b0@att.net>
References: <uhm4r5$7n5$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Info: dont-email.me; posting-host="12e43a75bfac7c9a90a15e4171890229";
logging-data="28670"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX19wvBlcL3XIp/AIksH39V38+XIqOL3+9ZQ="
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:102.0) Gecko/20100101
Thunderbird/102.15.1
Cancel-Lock: sha1:KUwgHkWKL64kIJpM+9rqPVEqIAw=
In-Reply-To: <uhm4r5$7n5$1@dont-email.me>
Content-Language: en-US
 by: Jim Burns - Sun, 29 Oct 2023 18:26 UTC

On 10/29/2023 1:30 PM, olcott wrote:

> [Subject: Does the halting problem
> actually limit what computers can do?]

> The inability to correctly answer
> an incorrect question places
> no actual limit on anyone or anything.

The inability of a computer program
to correctly answer all halting-questions
*places* no actual limit on anyone or anything.

That's not how a theorem works.

Nothing which a theorem is about _changes_
in response to a proof.

_We_ change in response to a proof.
Our state of knowledge changes.

Before we know that
no computer program decides all halting questions,
no computer program decides all halting questions.

The difference, before and after,
is in _what we know_

----
We finites are able to learn of
the existence of a wall of infinitely-many bricks
without our having stacked infinitely-many bricks
one on another.

All I am saying is:
Nice!

Re: Does the halting problem actually limit what computers can do?

<uhm8o9$vle$1@dont-email.me>

https://www.rocksolidbbs.com/computers/article-flat.php?id=11962&group=comp.ai.philosophy#11962
Path: i2pn2.org!i2pn.org!news.hispagatos.org!eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: polcott2@gmail.com (olcott)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Subject: Re: Does the halting problem actually limit what computers can do?
Date: Sun, 29 Oct 2023 13:36:55 -0500
Organization: A noiseless patient Spider
Lines: 52
Message-ID: <uhm8o9$vle$1@dont-email.me>
References: <uhm4r5$7n5$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sun, 29 Oct 2023 18:36:57 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="1b074cc8a65e3858c9d9d079519a8a78";
logging-data="32430"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1/5k9pPhJOz1XZ0sXSj6a1l"
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:c7w4XgdHBWvPRGsHrnAf3MfFAow=
In-Reply-To: <uhm4r5$7n5$1@dont-email.me>
Content-Language: en-US
 by: olcott - Sun, 29 Oct 2023 18:36 UTC

On 10/29/2023 12:30 PM, olcott wrote:
> *Everyone agrees that this is impossible*
> No computer program H can correctly predict what another computer
> program D will do when D has been programmed to do the opposite of
> whatever H says.
>
> H(D) is functional notation that specifies the return value from H(D)
> Correct(H(D)==false) means that H(D) is correct that D does not halt
> Correct(H(D)==true) means that H(D) is correct that D does halt
>
> For all H ∈ TM there exists input D such that
> (Correct(H(D)==false) ∨ (Correct(H(D)==true))==false
>
> *No one pays attention to what this impossibility means*
> The halting problem is defined as an unsatisfiable specification thus
> isomorphic to a question that has been defined to have no correct
> answer.
>
> What time is it (yes or no)?
> has no correct answer because there is something wrong with the
> question. In this case we know to blame the question and not the one
> answering it.
>
> When we understand that there are some inputs to every TM H that
> contradict both Boolean return values that H could return then the
> question: Does your input halt? is essentially a self-contradictory
> (thus incorrect) question in these cases.
>
> The inability to correctly answer an incorrect question places no actual
> limit on anyone or anything.
>
> This insight opens up an alternative treatment of these pathological
> inputs the same way that ZFC handled Russell's Paradox.
>

No computer program H can correctly predict what another computer
program D will do when D has been programmed to do the opposite of
whatever H says.

Every H of the infinite set of all Turing machines gets the wrong
answer on their corresponding input D because this input D
essentially derives a self-contradictory thus incorrect question
for this H.

Like the question: What time is it (yes or no)?
the blame for the lack of a correct answer goes to the question
and not the one attempting to answer it.

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: Does the halting problem actually limit what computers can do?

<uhm8t6$36q2u$2@i2pn2.org>

https://www.rocksolidbbs.com/computers/article-flat.php?id=11963&group=comp.ai.philosophy#11963
Path: i2pn2.org!.POSTED!not-for-mail
From: richard@damon-family.org (Richard Damon)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Subject: Re: Does the halting problem actually limit what computers can do?
Date: Sun, 29 Oct 2023 11:39:34 -0700
Organization: i2pn2 (i2pn.org)
Message-ID: <uhm8t6$36q2u$2@i2pn2.org>
References: <uhm4r5$7n5$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sun, 29 Oct 2023 18:39:35 -0000 (UTC)
Injection-Info: i2pn2.org;
logging-data="3369054"; mail-complaints-to="usenet@i2pn2.org";
posting-account="diqKR1lalukngNWEqoq9/uFtbkm5U+w3w6FQ0yesrXg";
User-Agent: Mozilla Thunderbird
Content-Language: en-US
In-Reply-To: <uhm4r5$7n5$1@dont-email.me>
 by: Richard Damon - Sun, 29 Oct 2023 18:39 UTC

On 10/29/23 10:30 AM, olcott wrote:
> *Everyone agrees that this is impossible*
> No computer program H can correctly predict what another computer
> program D will do when D has been programmed to do the opposite of
> whatever H says.
>
> H(D) is functional notation that specifies the return value from H(D)
> Correct(H(D)==false) means that H(D) is correct that D does not halt
> Correct(H(D)==true) means that H(D) is correct that D does halt

I noticed that I misread what "Correct" was defined as.

Note that Correct(H(D) == value), where value is True/False, can only be
true for the one value that H(D) actually returns; for the other value
it can NEVER be true.

Correct, as you have defined it, can't be used to determine if a
question actually has a correct value, only if H is correct in giving
its answer.

>
> For all H ∈ TM there exists input D such that
> (Correct(H(D)==false) ∨ (Correct(H(D)==true))==false
First, that ISN'T necessarily a true statement, unless you are stating
that D is a dependent variable such that:

for all H ∈ TM, there exists a D ∈ representation(TM) such that
(Correct(H(D)==false) ∨ Correct(H(D)==true)) == false

So, all you are saying here is that for all H there exists a D on which
H(D) happens to get the wrong answer. So what?

To point out the limitation of your "Correct" predicate, imagine that H,
instead of being a Halt Detector, was a Prime detector, but was
incorrectly programmed and thought 2 was not prime. Then

H(2) == False

Correct(H(2) == true) is false since H(2) doesn't return true, so H
wasn't correct in saying 2 is prime, and

Correct(H(2) == false) is false, since 2 is prime, so H is not correct
in saying it is not prime.

Thus: (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false

Doesn't say that the question is invalid, just that H got the answer wrong.

The fact that you can say the same for ALL possible Turing Machines
still doesn't make the question "Wrong", just uncomputable.

You don't seem to understand that H(D) is a FIXED VALUE based on the
program of H, and that value can legitimately be WRONG.

>
> *No one pays attention to what this impossibility means*
> The halting problem is defined as an unsatisfiable specification thus
> isomorphic to a question that has been defined to have no correct
> answer.
>
> What time is it (yes or no)?
> has no correct answer because there is something wrong with the
> question. In this case we know to blame the question and not the one
> answering it.
>
> When we understand that there are some inputs to every TM H that
> contradict both Boolean return values that H could return then the
> question: Does your input halt? is essentially a self-contradictory
> (thus incorrect) question in these cases.
>
> The inability to correctly answer an incorrect question places no actual
> limit on anyone or anything.
>
> This insight opens up an alternative treatment of these pathological
> inputs the same way that ZFC handled Russell's Paradox.
>

Re: Does the halting problem actually limit what computers can do?

<uhm960$vle$2@dont-email.me>

https://www.rocksolidbbs.com/computers/article-flat.php?id=11964&group=comp.ai.philosophy#11964
Path: i2pn2.org!i2pn.org!eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: polcott2@gmail.com (olcott)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Subject: Re: Does the halting problem actually limit what computers can do?
Date: Sun, 29 Oct 2023 13:44:16 -0500
Organization: A noiseless patient Spider
Lines: 54
Message-ID: <uhm960$vle$2@dont-email.me>
References: <uhm4r5$7n5$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sun, 29 Oct 2023 18:44:17 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="1b074cc8a65e3858c9d9d079519a8a78";
logging-data="32430"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1+wYjgxsyPwhy33tZBTVtF+"
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:VozNFfEgg3qwjddy0O7mAsAw7gE=
Content-Language: en-US
In-Reply-To: <uhm4r5$7n5$1@dont-email.me>
 by: olcott - Sun, 29 Oct 2023 18:44 UTC

On 10/29/2023 12:30 PM, olcott wrote:
> *Everyone agrees that this is impossible*
> No computer program H can correctly predict what another computer
> program D will do when D has been programmed to do the opposite of
> whatever H says.
>
> H(D) is functional notation that specifies the return value from H(D)
> Correct(H(D)==false) means that H(D) is correct that D does not halt
> Correct(H(D)==true) means that H(D) is correct that D does halt
>
> For all H ∈ TM there exists input D such that
> (Correct(H(D)==false) ∨ (Correct(H(D)==true))==false
>
> *No one pays attention to what this impossibility means*
> The halting problem is defined as an unsatisfiable specification thus
> isomorphic to a question that has been defined to have no correct
> answer.
>
> What time is it (yes or no)?
> has no correct answer because there is something wrong with the
> question. In this case we know to blame the question and not the one
> answering it.
>
> When we understand that there are some inputs to every TM H that
> contradict both Boolean return values that H could return then the
> question: Does your input halt? is essentially a self-contradictory
> (thus incorrect) question in these cases.
>
> The inability to correctly answer an incorrect question places no actual
> limit on anyone or anything.
>
> This insight opens up an alternative treatment of these pathological
> inputs the same way that ZFC handled Russell's Paradox.
>

No computer program H can correctly predict what another computer
program D will do when D has been programmed to do the opposite of
whatever H says.

Every H of the infinite set of all Turing machines gets the wrong
answer on their corresponding input D because this input D
essentially derives a self-contradictory thus incorrect question
for this H.

Like the question: What time is it (yes or no)?
the blame for the lack of a correct answer goes to the question
and not the one attempting to answer it.

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: Does the halting problem actually limit what computers can do?

<uhmauh$36q2u$3@i2pn2.org>

https://www.rocksolidbbs.com/computers/article-flat.php?id=11965&group=comp.ai.philosophy#11965
Path: i2pn2.org!.POSTED!not-for-mail
From: richard@damon-family.org (Richard Damon)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Subject: Re: Does the halting problem actually limit what computers can do?
Date: Sun, 29 Oct 2023 12:14:25 -0700
Organization: i2pn2 (i2pn.org)
Message-ID: <uhmauh$36q2u$3@i2pn2.org>
References: <uhm4r5$7n5$1@dont-email.me> <uhm960$vle$2@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sun, 29 Oct 2023 19:14:25 -0000 (UTC)
Injection-Info: i2pn2.org;
logging-data="3369054"; mail-complaints-to="usenet@i2pn2.org";
posting-account="diqKR1lalukngNWEqoq9/uFtbkm5U+w3w6FQ0yesrXg";
User-Agent: Mozilla Thunderbird
In-Reply-To: <uhm960$vle$2@dont-email.me>
Content-Language: en-US
 by: Richard Damon - Sun, 29 Oct 2023 19:14 UTC

On 10/29/23 11:44 AM, olcott wrote:
> On 10/29/2023 12:30 PM, olcott wrote:
>> *Everyone agrees that this is impossible*
>> No computer program H can correctly predict what another computer
>> program D will do when D has been programmed to do the opposite of
>> whatever H says.
>>
>> H(D) is functional notation that specifies the return value from H(D)
>> Correct(H(D)==false) means that H(D) is correct that D does not halt
>> Correct(H(D)==true) means that H(D) is correct that D does halt
>>
>> For all H ∈ TM there exists input D such that
>> (Correct(H(D)==false) ∨ (Correct(H(D)==true))==false
>>
>> *No one pays attention to what this impossibility means*
>> The halting problem is defined as an unsatisfiable specification thus
>> isomorphic to a question that has been defined to have no correct
>> answer.
>>
>> What time is it (yes or no)?
>> has no correct answer because there is something wrong with the
>> question. In this case we know to blame the question and not the one
>> answering it.
>>
>> When we understand that there are some inputs to every TM H that
>> contradict both Boolean return values that H could return then the
>> question: Does your input halt? is essentially a self-contradictory
>> (thus incorrect) question in these cases.
>>
>> The inability to correctly answer an incorrect question places no actual
>> limit on anyone or anything.
>>
>> This insight opens up an alternative treatment of these pathological
>> inputs the same way that ZFC handled Russell's Paradox.
>>
>
>
>
> No computer program H can correctly predict what another computer
> program D will do when D has been programmed to do the opposite of
> whatever H says.

So?

Who says they need to be able to do it?

That is EXACTLY what the Theorem is proving, which you admit, but
you want to refuse the logical consequence of it, because you don't

>
> Every H of the infinite set of all Turing machines gets the wrong
> answer on their corresponding input D because this input D
> essentially derives a self-contradictory thus incorrect question
> for this H.

Nope, you are confused by mixing sets with objects in the set.

Nice Category error there.

Every question in that set has a correct answer, which might have been
given by some member of the set of deciders. That shows that the
actual QUESTION is VALID and not "self-contradictory".

The fact that every instance of the question has a correct answer, makes
it VALID.

The fact that every decider has such a question that it can't answer,
makes it uncomputable.

The fact that your Strawman version (What can H return to be correct?)
doesn't have an answer is just part of the proof of the actual theorem,
and just shows your ignorance of the subject.

>
> Like the question: What time is it (yes or no)?
> the blame for the lack of a correct answer goes to the question
> and not the one attempting to answer it.
>

Nope, Strawman. You like Strawmen, I guess because they are just as
smart as you.

What time is it (yes or no)? doesn't have an answer.

"Does a particular D(D) halt?" DOES have an answer, and it will always
be the opposite of what H(D,D) returns, for the SPECIFIC H that D was
built to refute.

A question that has an answer which ONE machine can't give correctly is
not the same as a question that doesn't actually have an answer (due, in
this case, to a category error).

Your thinking they are the same just proves your stupidity.

Re: Does the halting problem actually limit what computers can do?

<uhmauk$36q2u$4@i2pn2.org>

https://www.rocksolidbbs.com/computers/article-flat.php?id=11966&group=comp.ai.philosophy#11966
Path: i2pn2.org!.POSTED!not-for-mail
From: richard@damon-family.org (Richard Damon)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Subject: Re: Does the halting problem actually limit what computers can do?
Date: Sun, 29 Oct 2023 12:14:28 -0700
Organization: i2pn2 (i2pn.org)
Message-ID: <uhmauk$36q2u$4@i2pn2.org>
References: <uhm4r5$7n5$1@dont-email.me> <uhm8o9$vle$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sun, 29 Oct 2023 19:14:28 -0000 (UTC)
Injection-Info: i2pn2.org;
logging-data="3369054"; mail-complaints-to="usenet@i2pn2.org";
posting-account="diqKR1lalukngNWEqoq9/uFtbkm5U+w3w6FQ0yesrXg";
User-Agent: Mozilla Thunderbird
In-Reply-To: <uhm8o9$vle$1@dont-email.me>
Content-Language: en-US
 by: Richard Damon - Sun, 29 Oct 2023 19:14 UTC

On 10/29/23 11:36 AM, olcott wrote:
> On 10/29/2023 12:30 PM, olcott wrote:
>> *Everyone agrees that this is impossible*
>> No computer program H can correctly predict what another computer
>> program D will do when D has been programmed to do the opposite of
>> whatever H says.
>>
>> H(D) is functional notation that specifies the return value from H(D)
>> Correct(H(D)==false) means that H(D) is correct that D does not halt
>> Correct(H(D)==true) means that H(D) is correct that D does halt
>>
>> For all H ∈ TM there exists input D such that
>> (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false
>>
>> *No one pays attention to what this impossibility means*
>> The halting problem is defined as an unsatisfiable specification thus
>> isomorphic to a question that has been defined to have no correct
>> answer.
>>
>> What time is it (yes or no)?
>> has no correct answer because there is something wrong with the
>> question. In this case we know to blame the question and not the one
>> answering it.
>>
>> When we understand that there are some inputs to every TM H that
>> contradict both Boolean return values that H could return then the
>> question: Does your input halt? is essentially a self-contradictory
>> (thus incorrect) question in these cases.
>>
>> The inability to correctly answer an incorrect question places no actual
>> limit on anyone or anything.
>>
>> This insight opens up an alternative treatment of these pathological
>> inputs the same way that ZFC handled Russell's Paradox.
>>
>
> No computer program H can correctly predict what another computer
> program D will do when D has been programmed to do the opposite of
> whatever H says.
>
So?

Who says they need to be able to do it?

That is EXACTLY what the Theorem is proving, which you admit, but
you want to reject the logical consequence of it, because you don't
actually understand how logic or Truth works.

> Every H of the infinite set of all Turing machines gets the wrong
> answer on their corresponding input D because this input D
> essentially derives a self-contradictory thus incorrect question
> for this H.

Nope, you are confused by mixing sets with objects in the set.

Nice Category error there.

Every question in that set has a correct answer, which might have been
given by some of the deciders in that set. That shows that the
actual QUESTION is VALID and not "self-contradictory".

The fact that every instance of the question has a correct answer, makes
it VALID.

The fact that every decider has such a question that it can't answer,
makes it uncomputable.

The fact that your Strawman version (What can H return to be correct)
doesn't have an answer is just part of the proof that the actual theorem
is proven, and just shows your ignorance of the subject.

>
> Like the question: What time is it (yes or no)?
> the blame for the lack of a correct answer goes to the question
> and not the one attempting to answer it.
>

Nope, Strawman. You like Strawmen, I guess because they are just as
smart as you.

What time is it (yes or no)? doesn't have an answer.

Does a particular D(D) Halt, DOES have an answer, and it will always be
the opposite of what the H(D,D) returns for the SPECIFIC H that D was
built to refute.

A question that ONE machine can't answer correctly is not the same as a
question that doesn't actually have an answer (due to a category error,
in this case).

Your thinking they are the same just proves your stupidity.

Re: Does the halting problem actually limit what computers can do?

<uhmbid$19s3$1@dont-email.me>


https://www.rocksolidbbs.com/computers/article-flat.php?id=11967&group=comp.ai.philosophy#11967

Path: i2pn2.org!i2pn.org!eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: polcott2@gmail.com (olcott)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Subject: Re: Does the halting problem actually limit what computers can do?
Date: Sun, 29 Oct 2023 14:25:00 -0500
Organization: A noiseless patient Spider
Lines: 63
Message-ID: <uhmbid$19s3$1@dont-email.me>
References: <uhm4r5$7n5$1@dont-email.me> <uhm960$vle$2@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sun, 29 Oct 2023 19:25:01 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="1b074cc8a65e3858c9d9d079519a8a78";
logging-data="42883"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1/oSL+pq+7Le7rig8KjNEsz"
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:+1OPVyIyNOt1LxFA6eNyHm7LxnE=
Content-Language: en-US
In-Reply-To: <uhm960$vle$2@dont-email.me>
 by: olcott - Sun, 29 Oct 2023 19:25 UTC

On 10/29/2023 1:44 PM, olcott wrote:
> On 10/29/2023 12:30 PM, olcott wrote:
>> *Everyone agrees that this is impossible*
>> No computer program H can correctly predict what another computer
>> program D will do when D has been programmed to do the opposite of
>> whatever H says.
>>
>> H(D) is functional notation that specifies the return value from H(D)
>> Correct(H(D)==false) means that H(D) is correct that D does not halt
>> Correct(H(D)==true) means that H(D) is correct that D does halt
>>
>> For all H ∈ TM there exists input D such that
>> (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false
>>
>> *No one pays attention to what this impossibility means*
>> The halting problem is defined as an unsatisfiable specification thus
>> isomorphic to a question that has been defined to have no correct
>> answer.
>>
>> What time is it (yes or no)?
>> has no correct answer because there is something wrong with the
>> question. In this case we know to blame the question and not the one
>> answering it.
>>
>> When we understand that there are some inputs to every TM H that
>> contradict both Boolean return values that H could return then the
>> question: Does your input halt? is essentially a self-contradictory
>> (thus incorrect) question in these cases.
>>
>> The inability to correctly answer an incorrect question places no actual
>> limit on anyone or anything.
>>
>> This insight opens up an alternative treatment of these pathological
>> inputs the same way that ZFC handled Russell's Paradox.
>>
>
>
>
> No computer program H can correctly predict what another computer
> program D will do when D has been programmed to do the opposite of
> whatever H says.
>
> Every H of the infinite set of all Turing machines gets the wrong
> answer on their corresponding input D because this input D
> essentially derives a self-contradictory thus incorrect question
> for this H.
>

Changing the subject to a different H for this same input D is
the strawman deception.

Ignoring the context of who is asked the question deceptively
changes the meaning of the question.

> Like the question: What time is it (yes or no)?
> the blame for the lack of a correct answer goes to the question
> and not the one attempting to answer it.
>

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: Does the halting problem actually limit what computers can do?

<uhmdpr$36q2u$5@i2pn2.org>


https://www.rocksolidbbs.com/computers/article-flat.php?id=11968&group=comp.ai.philosophy#11968

Path: i2pn2.org!.POSTED!not-for-mail
From: richard@damon-family.org (Richard Damon)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Subject: Re: Does the halting problem actually limit what computers can do?
Date: Sun, 29 Oct 2023 13:03:07 -0700
Organization: i2pn2 (i2pn.org)
Message-ID: <uhmdpr$36q2u$5@i2pn2.org>
References: <uhm4r5$7n5$1@dont-email.me> <uhm960$vle$2@dont-email.me>
<uhmbid$19s3$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sun, 29 Oct 2023 20:03:07 -0000 (UTC)
Injection-Info: i2pn2.org;
logging-data="3369054"; mail-complaints-to="usenet@i2pn2.org";
posting-account="diqKR1lalukngNWEqoq9/uFtbkm5U+w3w6FQ0yesrXg";
User-Agent: Mozilla Thunderbird
Content-Language: en-US
In-Reply-To: <uhmbid$19s3$1@dont-email.me>
 by: Richard Damon - Sun, 29 Oct 2023 20:03 UTC

On 10/29/23 12:25 PM, olcott wrote:
> On 10/29/2023 1:44 PM, olcott wrote:
>> On 10/29/2023 12:30 PM, olcott wrote:
>>> *Everyone agrees that this is impossible*
>>> No computer program H can correctly predict what another computer
>>> program D will do when D has been programmed to do the opposite of
>>> whatever H says.
>>>
>>> H(D) is functional notation that specifies the return value from H(D)
>>> Correct(H(D)==false) means that H(D) is correct that D does not halt
>>> Correct(H(D)==true) means that H(D) is correct that D does halt
>>>
>>> For all H ∈ TM there exists input D such that
>>> (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false
>>>
>>> *No one pays attention to what this impossibility means*
>>> The halting problem is defined as an unsatisfiable specification thus
>>> isomorphic to a question that has been defined to have no correct
>>> answer.
>>>
>>> What time is it (yes or no)?
>>> has no correct answer because there is something wrong with the
>>> question. In this case we know to blame the question and not the one
>>> answering it.
>>>
>>> When we understand that there are some inputs to every TM H that
>>> contradict both Boolean return values that H could return then the
>>> question: Does your input halt? is essentially a self-contradictory
>>> (thus incorrect) question in these cases.
>>>
>>> The inability to correctly answer an incorrect question places no actual
>>> limit on anyone or anything.
>>>
>>> This insight opens up an alternative treatment of these pathological
>>> inputs the same way that ZFC handled Russell's Paradox.
>>>
>>
>>
>>
>> No computer program H can correctly predict what another computer
>> program D will do when D has been programmed to do the opposite of
>> whatever H says.
>>
>> Every H of the infinite set of all Turing machines gets the wrong
>> answer on their corresponding input D because this input D
>> essentially derives a self-contradictory thus incorrect question
>> for this H.
>>
>
> Changing the subject to a different H for this same input D is
> the strawman deception.

YOU'RE the one that said "for all H", so the strawman is YOURS

>
> Ignoring the context of who is asked the question deceptively
> changes the meaning of the question.

Except that the question's answer isn't affected by the context in which
it is asked.

"Does a SPECIFIED D(D) Halt?" is INDEPENDENT of who you ask.

So, you are just showing your deceitfulness: while the question is,
each time, about a SPECIFIC input, you try to change it to the input
associated with the decider deciding it, which is not a valid input.

You are just showing your stupidity by the form of your arguments.

>
>> Like the question: What time is it (yes or no)?
>> the blame for the lack of a correct answer goes to the question
>> and not the one attempting to answer it.
>>
>

Re: Does the halting problem actually limit what computers can do?

<uhme7j$1umr$1@dont-email.me>


https://www.rocksolidbbs.com/computers/article-flat.php?id=11969&group=comp.ai.philosophy#11969

Path: i2pn2.org!i2pn.org!eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: polcott2@gmail.com (olcott)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Subject: Re: Does the halting problem actually limit what computers can do?
Date: Sun, 29 Oct 2023 15:10:27 -0500
Organization: A noiseless patient Spider
Lines: 60
Message-ID: <uhme7j$1umr$1@dont-email.me>
References: <uhm4r5$7n5$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sun, 29 Oct 2023 20:10:28 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="1b074cc8a65e3858c9d9d079519a8a78";
logging-data="64219"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1+T7hXhXqERKEVmS868dBVT"
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:TKkpGVPPtF96K9Q6R3KZbb9TMUg=
In-Reply-To: <uhm4r5$7n5$1@dont-email.me>
Content-Language: en-US
 by: olcott - Sun, 29 Oct 2023 20:10 UTC

On 10/29/2023 12:30 PM, olcott wrote:
> *Everyone agrees that this is impossible*
> No computer program H can correctly predict what another computer
> program D will do when D has been programmed to do the opposite of
> whatever H says.
>
> H(D) is functional notation that specifies the return value from H(D)
> Correct(H(D)==false) means that H(D) is correct that D does not halt
> Correct(H(D)==true) means that H(D) is correct that D does halt
>
> For all H ∈ TM there exists input D such that
> (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false
>
> *No one pays attention to what this impossibility means*
> The halting problem is defined as an unsatisfiable specification thus
> isomorphic to a question that has been defined to have no correct
> answer.
>
> What time is it (yes or no)?
> has no correct answer because there is something wrong with the
> question. In this case we know to blame the question and not the one
> answering it.
>
> When we understand that there are some inputs to every TM H that
> contradict both Boolean return values that H could return then the
> question: Does your input halt? is essentially a self-contradictory
> (thus incorrect) question in these cases.
>
> The inability to correctly answer an incorrect question places no actual
> limit on anyone or anything.
>
> This insight opens up an alternative treatment of these pathological
> inputs the same way that ZFC handled Russell's Paradox.
>

Every H of the infinite set of all Turing machines gets the wrong
answer

on their corresponding input D
on their corresponding input D
on their corresponding input D
on their corresponding input D
on their corresponding input D
on their corresponding input D
on their corresponding input D
on their corresponding input D
on their corresponding input D
on their corresponding input D
on their corresponding input D
on their corresponding input D
on their corresponding input D

because this input D essentially derives a self-contradictory thus
incorrect question for this H.

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: Does the halting problem actually limit what computers can do?

<uhmeh6$1umr$2@dont-email.me>


https://www.rocksolidbbs.com/computers/article-flat.php?id=11970&group=comp.ai.philosophy#11970

Path: i2pn2.org!i2pn.org!eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: polcott2@gmail.com (olcott)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Subject: Re: Does the halting problem actually limit what computers can do?
Date: Sun, 29 Oct 2023 15:15:33 -0500
Organization: A noiseless patient Spider
Lines: 64
Message-ID: <uhmeh6$1umr$2@dont-email.me>
References: <uhm4r5$7n5$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sun, 29 Oct 2023 20:15:34 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="1b074cc8a65e3858c9d9d079519a8a78";
logging-data="64219"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX19FmJK+/CC7FnaT5jfb9c5S"
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:KcYXnctqh0su8Umh5MPly2b/UfI=
Content-Language: en-US
In-Reply-To: <uhm4r5$7n5$1@dont-email.me>
 by: olcott - Sun, 29 Oct 2023 20:15 UTC

On 10/29/2023 12:30 PM, olcott wrote:
> *Everyone agrees that this is impossible*
> No computer program H can correctly predict what another computer
> program D will do when D has been programmed to do the opposite of
> whatever H says.
>
> H(D) is functional notation that specifies the return value from H(D)
> Correct(H(D)==false) means that H(D) is correct that D does not halt
> Correct(H(D)==true) means that H(D) is correct that D does halt
>
> For all H ∈ TM there exists input D such that
> (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false
>
> *No one pays attention to what this impossibility means*
> The halting problem is defined as an unsatisfiable specification thus
> isomorphic to a question that has been defined to have no correct
> answer.
>
> What time is it (yes or no)?
> has no correct answer because there is something wrong with the
> question. In this case we know to blame the question and not the one
> answering it.
>
> When we understand that there are some inputs to every TM H that
> contradict both Boolean return values that H could return then the
> question: Does your input halt? is essentially a self-contradictory
> (thus incorrect) question in these cases.
>
> The inability to correctly answer an incorrect question places no actual
> limit on anyone or anything.
>
> This insight opens up an alternative treatment of these pathological
> inputs the same way that ZFC handled Russell's Paradox.
>

Every H of the infinite set of all Turing machines gets the wrong
answer

on their corresponding input D
on their corresponding input D
on their corresponding input D
on their corresponding input D
on their corresponding input D

because this input D
because this input D
because this input D
because this input D
because this input D

essentially derives a self-contradictory thus

incorrect question for this H.
incorrect question for this H.
incorrect question for this H.
incorrect question for this H.
incorrect question for this H.

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: Does the halting problem actually limit what computers can do?

<uhmfoq$36q2u$6@i2pn2.org>


https://www.rocksolidbbs.com/computers/article-flat.php?id=11971&group=comp.ai.philosophy#11971

Path: i2pn2.org!.POSTED!not-for-mail
From: richard@damon-family.org (Richard Damon)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Subject: Re: Does the halting problem actually limit what computers can do?
Date: Sun, 29 Oct 2023 13:36:42 -0700
Organization: i2pn2 (i2pn.org)
Message-ID: <uhmfoq$36q2u$6@i2pn2.org>
References: <uhm4r5$7n5$1@dont-email.me> <uhme7j$1umr$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sun, 29 Oct 2023 20:36:42 -0000 (UTC)
Injection-Info: i2pn2.org;
logging-data="3369054"; mail-complaints-to="usenet@i2pn2.org";
posting-account="diqKR1lalukngNWEqoq9/uFtbkm5U+w3w6FQ0yesrXg";
User-Agent: Mozilla Thunderbird
Content-Language: en-US
In-Reply-To: <uhme7j$1umr$1@dont-email.me>
 by: Richard Damon - Sun, 29 Oct 2023 20:36 UTC

On 10/29/23 1:10 PM, olcott wrote:
> On 10/29/2023 12:30 PM, olcott wrote:
>> *Everyone agrees that this is impossible*
>> No computer program H can correctly predict what another computer
>> program D will do when D has been programmed to do the opposite of
>> whatever H says.
>>
>> H(D) is functional notation that specifies the return value from H(D)
>> Correct(H(D)==false) means that H(D) is correct that D does not halt
>> Correct(H(D)==true) means that H(D) is correct that D does halt
>>
>> For all H ∈ TM there exists input D such that
>> (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false
>>
>> *No one pays attention to what this impossibility means*
>> The halting problem is defined as an unsatisfiable specification thus
>> isomorphic to a question that has been defined to have no correct
>> answer.
>>
>> What time is it (yes or no)?
>> has no correct answer because there is something wrong with the
>> question. In this case we know to blame the question and not the one
>> answering it.
>>
>> When we understand that there are some inputs to every TM H that
>> contradict both Boolean return values that H could return then the
>> question: Does your input halt? is essentially a self-contradictory
>> (thus incorrect) question in these cases.
>>
>> The inability to correctly answer an incorrect question places no actual
>> limit on anyone or anything.
>>
>> This insight opens up an alternative treatment of these pathological
>> inputs the same way that ZFC handled Russell's Paradox.
>>
>
> Every H of the infinite set of all Turing machines gets the wrong
> answer
>
> on their corresponding input D
> on their corresponding input D
> on their corresponding input D
> on their corresponding input D
> on their corresponding input D
> on their corresponding input D
> on their corresponding input D
> on their corresponding input D
> on their corresponding input D
> on their corresponding input D
> on their corresponding input D
> on their corresponding input D
> on their corresponding input D
>
> because this input D essentially derives a self-contradictory thus
> incorrect question for this H.
>
>

Almost, but each is a DIFFERENT Question, and all the questions have an
answer, and thus are VALID.

D isn't "Self-Contradictory", it is contradictory to a DIFFERENT machine
than itself.

I guess you are just showing you don't know the meaning of "self"
because you are too stupid.

(And acting like a two-year-old in repeating your erroneous claim over
and over as a BIG LIE, thinking that makes it more correct.)

You still refuse to actually try to point out the actual errors in my
statement but continue to repeat your proven wrong statements, showing
that you are just a pitiful logical idiot.

"Does (a specific) D(D) as specified by the input Halt?" is a valid
question as it has a correct answer.

The fact we can come up with a D (different in each case) for ANY H, as
you have admitted, means the question is not computable.

Maybe you should try to prove your point with more than just an appeal
to a (proven incorrect) authority (namely you).

Try starting out with some actual accepted definitions of the terms and
use some sound logic (not sure you know any) to try to make your point.

Remember, the question you are trying to prove invalid is:

"Does the specific computation described by the input Halt when run?"

and not "What does H need to return to get the right answer?" (which is
an invalid question, as for ANY specific H, it CAN only return the
answer that its algorithm will compute, and a given H has a specified
specific algorithm).

and also not, "Does an H exist that can return the right value for the
D(D) derived from it?" as that is asking not about a specific input, but
about the existence of a machine to compute something. Non-existence of
machines to do something is NOT an "error", but a sign that the problem is
uncomputable, which is exactly the type of question that Computability
Theory investigates: what sorts of questions ARE computable, and which
are not. Not being computable is an acceptable state for a problem.
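The distinction drawn above can be made concrete with a small Python sketch (hypothetical toy deciders; an exception stands in for an infinite loop so the code actually terminates): each SPECIFIC H can only return the fixed verdict its algorithm computes, while the question "does D(D) halt?" has a definite answer in every instance, namely the opposite of that H's verdict.

```python
def make_D(H):
    """Build the D that contradicts the specific decider H it was given."""
    def D(x):
        if H(D, x):
            # Stand-in for an infinite loop, so this sketch terminates.
            raise RuntimeError("would loop forever")
        return "halted"
    return D

for name, H in [("always-says-halts", lambda p, a: True),
                ("always-says-loops", lambda p, a: False)]:
    D = make_D(H)
    verdict = H(D, D)          # what this SPECIFIC H returns (fixed by its algorithm)
    try:
        D(D)
        actually_halts = True
    except RuntimeError:
        actually_halts = False
    # The instance question "does D(D) halt?" has an answer; it is
    # simply the opposite of this particular H's verdict.
    assert actually_halts == (not verdict)
```

This is only a toy demonstration with two trivial deciders, but the same pattern applies to any H: the verdict is determined by H's algorithm, and the D built against it makes that verdict wrong, without the question itself lacking an answer.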

Re: Does the halting problem actually limit what computers can do?

<uhmfv0$36q2u$7@i2pn2.org>


https://www.rocksolidbbs.com/computers/article-flat.php?id=11972&group=comp.ai.philosophy#11972

Path: i2pn2.org!.POSTED!not-for-mail
From: richard@damon-family.org (Richard Damon)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Subject: Re: Does the halting problem actually limit what computers can do?
Date: Sun, 29 Oct 2023 13:40:00 -0700
Organization: i2pn2 (i2pn.org)
Message-ID: <uhmfv0$36q2u$7@i2pn2.org>
References: <uhm4r5$7n5$1@dont-email.me> <uhmeh6$1umr$2@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sun, 29 Oct 2023 20:40:00 -0000 (UTC)
Injection-Info: i2pn2.org;
logging-data="3369054"; mail-complaints-to="usenet@i2pn2.org";
posting-account="diqKR1lalukngNWEqoq9/uFtbkm5U+w3w6FQ0yesrXg";
User-Agent: Mozilla Thunderbird
Content-Language: en-US
In-Reply-To: <uhmeh6$1umr$2@dont-email.me>
 by: Richard Damon - Sun, 29 Oct 2023 20:40 UTC

On 10/29/23 1:15 PM, olcott wrote:
> On 10/29/2023 12:30 PM, olcott wrote:
>> *Everyone agrees that this is impossible*
>> No computer program H can correctly predict what another computer
>> program D will do when D has been programmed to do the opposite of
>> whatever H says.
>>
>> H(D) is functional notation that specifies the return value from H(D)
>> Correct(H(D)==false) means that H(D) is correct that D does not halt
>> Correct(H(D)==true) means that H(D) is correct that D does halt
>>
>> For all H ∈ TM there exists input D such that
>> (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false
>>
>> *No one pays attention to what this impossibility means*
>> The halting problem is defined as an unsatisfiable specification thus
>> isomorphic to a question that has been defined to have no correct
>> answer.
>>
>> What time is it (yes or no)?
>> has no correct answer because there is something wrong with the
>> question. In this case we know to blame the question and not the one
>> answering it.
>>
>> When we understand that there are some inputs to every TM H that
>> contradict both Boolean return values that H could return then the
>> question: Does your input halt? is essentially a self-contradictory
>> (thus incorrect) question in these cases.
>>
>> The inability to correctly answer an incorrect question places no actual
>> limit on anyone or anything.
>>
>> This insight opens up an alternative treatment of these pathological
>> inputs the same way that ZFC handled Russell's Paradox.
>>
>
>
> Every H of the infinite set of all Turing machines gets the wrong
> answer
>
> on their corresponding input D
> on their corresponding input D
> on their corresponding input D
> on their corresponding input D
> on their corresponding input D
>
> because this input D
> because this input D
> because this input D
> because this input D
> because this input D
>
> essentially derives a self-contradictory thus
>
> incorrect question for this H.
> incorrect question for this H.
> incorrect question for this H.
> incorrect question for this H.
> incorrect question for this H.
>
>

Repeating your answer because you weren't two-year-old enough the first time?

You also have a category error, as you are conflating H as "every"
machine of the set with THIS machine of the set.

For THIS machine of the set, and THIS D of the set, there IS an answer,
so the question is valid.

And each is a DIFFERENT Question, and all the questions have an
answer, and thus are also VALID.

D isn't "Self-Contradictory", it is contradictory to a DIFFERENT machine
than itself.

I guess you are just showing you don't know the meaning of "self"
because you are too stupid.

(And acting like a two-year-old in repeating your erroneous claim over
and over as a BIG LIE, thinking that makes it more correct.)

You still refuse to actually try to point out the actual errors in my
statement but continue to repeat your proven wrong statements, showing
that you are just a pitiful logical idiot.

"Does (a specific) D(D) as specified by the input Halt?" is a valid
question as it has a correct answer.

The fact we can come up with a D (different in each case) for ANY H, as
you have admitted, means the question is not computable.

Maybe you should try to prove your point with more than just an appeal
to a (proven incorrect) authority (namely you).

Try starting out with some actual accepted definitions of the terms and
use some sound logic (not sure you know any) to try to make your point.

Remember, the question you are trying to prove invalid is:

"Does the specific computation described by the input Halt when run?"

and not "What does H need to return to get the right answer?" (which is
an invalid question, as for ANY specific H, it CAN only return the
answer that its algorithm will compute, and a given H has a specified
specific algorithm).

and also not, "Does an H exist that can return the right value for the
D(D) derived from it?" as that is asking not about a specific input, but
about the existence of a machine to compute something. Non-existence of
machines to do something is NOT an "error", but a sign that the problem is
uncomputable, which is exactly the type of question that Computability
Theory investigates: what sorts of questions ARE computable, and which
are not. Not being computable is an acceptable state for a problem.

Re: Does the halting problem actually limit what computers can do?

<uhmh16$2ebu$1@dont-email.me>


https://www.rocksolidbbs.com/computers/article-flat.php?id=11973&group=comp.ai.philosophy#11973

Path: i2pn2.org!i2pn.org!eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: polcott2@gmail.com (olcott)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Subject: Re: Does the halting problem actually limit what computers can do?
Date: Sun, 29 Oct 2023 15:58:11 -0500
Organization: A noiseless patient Spider
Lines: 52
Message-ID: <uhmh16$2ebu$1@dont-email.me>
References: <uhm4r5$7n5$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sun, 29 Oct 2023 20:58:14 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="1b074cc8a65e3858c9d9d079519a8a78";
logging-data="80254"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX18IQyAzZIZ3E2UXH2GZE2vJ"
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:weCj+WApUo4bHsYEmmwiCHvMBXw=
In-Reply-To: <uhm4r5$7n5$1@dont-email.me>
Content-Language: en-US
 by: olcott - Sun, 29 Oct 2023 20:58 UTC

On 10/29/2023 12:30 PM, olcott wrote:
> *Everyone agrees that this is impossible*
> No computer program H can correctly predict what another computer
> program D will do when D has been programmed to do the opposite of
> whatever H says.
>
> H(D) is functional notation that specifies the return value from H(D)
> Correct(H(D)==false) means that H(D) is correct that D does not halt
> Correct(H(D)==true) means that H(D) is correct that D does halt
>
> For all H ∈ TM there exists input D such that
> (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false
>
> *No one pays attention to what this impossibility means*
> The halting problem is defined as an unsatisfiable specification thus
> isomorphic to a question that has been defined to have no correct
> answer.
>
> What time is it (yes or no)?
> has no correct answer because there is something wrong with the
> question. In this case we know to blame the question and not the one
> answering it.
>
> When we understand that there are some inputs to every TM H that
> contradict both Boolean return values that H could return then the
> question: Does your input halt? is essentially a self-contradictory
> (thus incorrect) question in these cases.
>
> The inability to correctly answer an incorrect question places no actual
> limit on anyone or anything.
>
> This insight opens up an alternative treatment of these pathological
> inputs the same way that ZFC handled Russell's Paradox.
>

The halting problem proofs merely show that the problem
definition is unsatisfiable because every H of the infinite
set of all Turing Machines has an input that makes the
question: Does your input halt? into a self-contradictory
thus incorrect question for this H.
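The construction at issue can be sketched concretely (a hypothetical Python sketch; the names make_D and H_guess_halts are illustrative, not anyone's actual decider):

```python
# Hypothetical sketch of the "pathological" input under discussion.
# H is a candidate halt decider: H(p) == True is the prediction "p halts".
def make_D(H):
    def D():
        if H(D):            # H predicts D halts...
            while True:     # ...so D loops forever instead
                pass
        # otherwise H predicts D loops, so D halts immediately
    return D

def H_guess_halts(p):       # a trivial candidate that always answers "halts"
    return True

D = make_D(H_guess_halts)
# H_guess_halts(D) is True, yet running D() would loop forever:
# whatever answer a given H gives for its own diagonal input is wrong.
```

The same inversion defeats a candidate that always answers "loops": its D halts immediately, again contradicting the prediction.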

The only rebuttals to this in the last two years rely
on one form of the strawman deception or another.

*Stupid or dishonest people may say otherwise*
That every D has a halt decider has nothing to do with
the claim that every H has an undecidable input.

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: Does the halting problem actually limit what computers can do?

<uhmjqm$36q2u$8@i2pn2.org>

  copy mid

https://www.rocksolidbbs.com/computers/article-flat.php?id=11974&group=comp.ai.philosophy#11974

  copy link   Newsgroups: sci.math sci.logic comp.theory comp.ai.philosophy
Path: i2pn2.org!.POSTED!not-for-mail
From: richard@damon-family.org (Richard Damon)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Subject: Re: Does the halting problem actually limit what computers can do?
Date: Sun, 29 Oct 2023 14:45:58 -0700
Organization: i2pn2 (i2pn.org)
Message-ID: <uhmjqm$36q2u$8@i2pn2.org>
References: <uhm4r5$7n5$1@dont-email.me> <uhmh16$2ebu$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sun, 29 Oct 2023 21:45:59 -0000 (UTC)
Injection-Info: i2pn2.org;
logging-data="3369054"; mail-complaints-to="usenet@i2pn2.org";
posting-account="diqKR1lalukngNWEqoq9/uFtbkm5U+w3w6FQ0yesrXg";
User-Agent: Mozilla Thunderbird
In-Reply-To: <uhmh16$2ebu$1@dont-email.me>
Content-Language: en-US
 by: Richard Damon - Sun, 29 Oct 2023 21:45 UTC

On 10/29/23 1:58 PM, olcott wrote:
> On 10/29/2023 12:30 PM, olcott wrote:
>> *Everyone agrees that this is impossible*
>> No computer program H can correctly predict what another computer
>> program D will do when D has been programmed to do the opposite of
>> whatever H says.
>>
>> H(D) is functional notation that specifies the return value from H(D)
>> Correct(H(D)==false) means that H(D) is correct that D does not halt
>> Correct(H(D)==true) means that H(D) is correct that D does halt
>>
>> For all H ∈ TM there exists input D such that
>> (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false
>>
>> *No one pays attention to what this impossibility means*
>> The halting problem is defined as an unsatisfiable specification thus
>> isomorphic to a question that has been defined to have no correct
>> answer.
>>
>> What time is it (yes or no)?
>> has no correct answer because there is something wrong with the
>> question. In this case we know to blame the question and not the one
>> answering it.
>>
>> When we understand that there are some inputs to every TM H that
>> contradict both Boolean return values that H could return then the
>> question: Does your input halt? is essentially a self-contradictory
>> (thus incorrect) question in these cases.
>>
>> The inability to correctly answer an incorrect question places no actual
>> limit on anyone or anything.
>>
>> This insight opens up an alternative treatment of these pathological
>> inputs the same way that ZFC handled Russell's Paradox.
>>
>
> The halting problem proofs merely show that the problem
> definition is unsatisfiable because every H of the infinite
> set of all Turing Machines has an input that makes the
> question: Does your input halt? into a self-contradictory
> thus incorrect question for this H.

So, you are just showing that you don't know what "satisfiable" means in
logic, just showing off your ignorance (even though you have been told
before, I guess you are too stupid to learn).

You also seem to not understand what the "self" part of
"self-contradictory" means, again, because you are too stupid to
understand when taught.

You also are repeating your category error by confusing specific
questions for sets of questions.

>
> The only rebuttals to this in the last two years rely
> on one form of the strawman deception of another.
>

Nope, your failure to actually point to an error shows that you don't
understand how logic works.

If my replies are strawman, you can point to the claim that isn't
actually correct, and reference the accepted definition of the problem
to show where they differ.

The problem here is that you are just projecting, as a fundamental part
of the problem is that you try to change the fundamental nature of the
problem by building your own strawmen, and when I knock them down, you
claim my reassertion of the actual problem is a strawman, because you
can't recognise the actual problem.

> *Stupid or dishonest people may say otherwise*
> That every D has a halt decider has nothing to do with
> the claim that every H has an undecidable input.
>

So, more stupid errors.

The "input" is not "undecidable", as for every specific H there is a
specific D(D), and that input has a definite behavior, so the question of
whether it halts is valid.

Also, due to the limited nature of your H's design, that input's behavior
IS decidable by another decider, and "decidable" just requires that
there exist SOME decider (which doesn't need to be your H) that can
answer the question correctly, and that exists; you have even shown how
to build it (your H1).

Thus, it isn't the "input" that is undecidable; it is the PROBLEM that
is, as no one machine can compute the answer for every possible input.
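This distinction can be made concrete with a toy sketch (hypothetical Python; H1 here stands in for the separate decider mentioned above and is special-cased purely for illustration):

```python
# Hypothetical sketch: the specific diagonal input has a definite behavior,
# and SOME decider answers correctly for it -- just not the H it was built from.
def make_D(H):
    def D():
        if H(D):            # do the opposite of whatever H predicts
            while True:
                pass
    return D

def H(p):                   # this candidate always predicts "halts"
    return True

D = make_D(H)               # since H(D) is True, D in fact loops forever

def H1(p):                  # a different decider, hard-coded for this one input
    if p is D:
        return False        # the correct answer: D does not halt
    return True

# H is wrong about its own diagonal input, but H1 is right about it:
# the input itself is decidable, even though no single machine is
# right about every such input.
```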

AGAIN, you are showing your STUPIDITY and IGNORANCE.

Re: Does the halting problem actually limit what computers can do?

<uhmmsr$3ft2$1@dont-email.me>

  copy mid

https://www.rocksolidbbs.com/computers/article-flat.php?id=11975&group=comp.ai.philosophy#11975

  copy link   Newsgroups: sci.math sci.logic comp.theory comp.ai.philosophy
Path: i2pn2.org!i2pn.org!eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: polcott2@gmail.com (olcott)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Subject: Re: Does the halting problem actually limit what computers can do?
Date: Sun, 29 Oct 2023 17:38:17 -0500
Organization: A noiseless patient Spider
Lines: 58
Message-ID: <uhmmsr$3ft2$1@dont-email.me>
References: <uhm4r5$7n5$1@dont-email.me> <uhmh16$2ebu$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sun, 29 Oct 2023 22:38:19 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="1b074cc8a65e3858c9d9d079519a8a78";
logging-data="114594"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1/pRuURf8ktGpzfuEq8Tws2"
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:F4hARiQiFHY2Rdb2pelRtILLTew=
In-Reply-To: <uhmh16$2ebu$1@dont-email.me>
Content-Language: en-US
 by: olcott - Sun, 29 Oct 2023 22:38 UTC

On 10/29/2023 3:58 PM, olcott wrote:
> On 10/29/2023 12:30 PM, olcott wrote:
>> *Everyone agrees that this is impossible*
>> No computer program H can correctly predict what another computer
>> program D will do when D has been programmed to do the opposite of
>> whatever H says.
>>
>> H(D) is functional notation that specifies the return value from H(D)
>> Correct(H(D)==false) means that H(D) is correct that D does not halt
>> Correct(H(D)==true) means that H(D) is correct that D does halt
>>
>> For all H ∈ TM there exists input D such that
>> (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false
>>
>> *No one pays attention to what this impossibility means*
>> The halting problem is defined as an unsatisfiable specification thus
>> isomorphic to a question that has been defined to have no correct
>> answer.
>>
>> What time is it (yes or no)?
>> has no correct answer because there is something wrong with the
>> question. In this case we know to blame the question and not the one
>> answering it.
>>
>> When we understand that there are some inputs to every TM H that
>> contradict both Boolean return values that H could return then the
>> question: Does your input halt? is essentially a self-contradictory
>> (thus incorrect) question in these cases.
>>
>> The inability to correctly answer an incorrect question places no actual
>> limit on anyone or anything.
>>
>> This insight opens up an alternative treatment of these pathological
>> inputs the same way that ZFC handled Russell's Paradox.
>>
>
> The halting problem proofs merely show that the problem
> definition is unsatisfiable because every H of the infinite
> set of all Turing Machines has an input that makes the
> question: Does your input halt? into a self-contradictory
> thus incorrect question for this H.

I now have two University professors that agree with this.
My words may need some technical improvement...

[problem specification] is unsatisfiable

The idea is to convey the essence of many technical
papers in a single sound bite:

*The halting problem proofs merely show that*
*self-contradictory questions have no correct answer*

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: Does the halting problem actually limit what computers can do?

<uhmpsu$36q2u$9@i2pn2.org>

  copy mid

https://www.rocksolidbbs.com/computers/article-flat.php?id=11976&group=comp.ai.philosophy#11976

  copy link   Newsgroups: sci.math sci.logic comp.theory comp.ai.philosophy
Path: i2pn2.org!.POSTED!not-for-mail
From: richard@damon-family.org (Richard Damon)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Subject: Re: Does the halting problem actually limit what computers can do?
Date: Sun, 29 Oct 2023 16:29:34 -0700
Organization: i2pn2 (i2pn.org)
Message-ID: <uhmpsu$36q2u$9@i2pn2.org>
References: <uhm4r5$7n5$1@dont-email.me> <uhmh16$2ebu$1@dont-email.me>
<uhmmsr$3ft2$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sun, 29 Oct 2023 23:29:34 -0000 (UTC)
Injection-Info: i2pn2.org;
logging-data="3369054"; mail-complaints-to="usenet@i2pn2.org";
posting-account="diqKR1lalukngNWEqoq9/uFtbkm5U+w3w6FQ0yesrXg";
User-Agent: Mozilla Thunderbird
Content-Language: en-US
In-Reply-To: <uhmmsr$3ft2$1@dont-email.me>
 by: Richard Damon - Sun, 29 Oct 2023 23:29 UTC

On 10/29/23 3:38 PM, olcott wrote:
> On 10/29/2023 3:58 PM, olcott wrote:
>> On 10/29/2023 12:30 PM, olcott wrote:
>>> *Everyone agrees that this is impossible*
>>> No computer program H can correctly predict what another computer
>>> program D will do when D has been programmed to do the opposite of
>>> whatever H says.
>>>
>>> H(D) is functional notation that specifies the return value from H(D)
>>> Correct(H(D)==false) means that H(D) is correct that D does not halt
>>> Correct(H(D)==true) means that H(D) is correct that D does halt
>>>
>>> For all H ∈ TM there exists input D such that
>>> (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false
>>>
>>> *No one pays attention to what this impossibility means*
>>> The halting problem is defined as an unsatisfiable specification thus
>>> isomorphic to a question that has been defined to have no correct
>>> answer.
>>>
>>> What time is it (yes or no)?
>>> has no correct answer because there is something wrong with the
>>> question. In this case we know to blame the question and not the one
>>> answering it.
>>>
>>> When we understand that there are some inputs to every TM H that
>>> contradict both Boolean return values that H could return then the
>>> question: Does your input halt? is essentially a self-contradictory
>>> (thus incorrect) question in these cases.
>>>
>>> The inability to correctly answer an incorrect question places no actual
>>> limit on anyone or anything.
>>>
>>> This insight opens up an alternative treatment of these pathological
>>> inputs the same way that ZFC handled Russell's Paradox.
>>>
>>
>> The halting problem proofs merely show that the problem
>> definition is unsatisfiable because every H of the infinite
>> set of all Turing Machines has an input that makes the
>> question: Does your input halt? into a self-contradictory
>> thus incorrect question for this H.
>
> I now have two University professors that agree with this.
> My words may need some technical improvement...
>
> [problem specification] is unsatisfiable
>
> The idea is to convey the essence of many technical
> papers in a single sound bite:
>
> *The halting problem proofs merely show that*
> *self-contradictory questions have no correct answer*
>
>

Anonymous experts are not "evidence" and no "expert" can contradict the
actual definitions.

Especially when you don't even quote the actual words used, since you
have shown yourself to misinterpret what they are saying, or have used
misleading wording where they will interpret your words to mean what
they are supposed to mean, and not your corrupted meaning.

You are just continuing to prove that you do not understand how logic
works, and by not even trying to refute the rebuttals you are accepting
them as correct responses, and thus admitting you are just a stupid liar.

As pointed out, the actual questions DO have answers, so you are just an
unsound liar with your arguments that they do not.

You are just making sure that your name will be MUD for as long as it is
remembered, until it falls into the trash heap of history.

This will also mean that any good ideas you might have had have been
poisoned and worthless.

You have just gaslighted yourself into being a babbling idiot
that can only repeat the lies he convinced himself of, with no actual
logical backing.

Too bad.

Re: Does the halting problem actually limit what computers can do?

<uhmqnq$40j0$1@dont-email.me>

  copy mid

https://www.rocksolidbbs.com/computers/article-flat.php?id=11977&group=comp.ai.philosophy#11977

  copy link   Newsgroups: sci.math sci.logic comp.theory comp.ai.philosophy
Path: i2pn2.org!i2pn.org!eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: polcott2@gmail.com (olcott)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Subject: Re: Does the halting problem actually limit what computers can do?
Date: Sun, 29 Oct 2023 18:43:54 -0500
Organization: A noiseless patient Spider
Lines: 73
Message-ID: <uhmqnq$40j0$1@dont-email.me>
References: <uhm4r5$7n5$1@dont-email.me> <uhmh16$2ebu$1@dont-email.me>
<uhmmsr$3ft2$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sun, 29 Oct 2023 23:43:54 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="01655030c6df07099b4d908f11be3d86";
logging-data="131680"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX19s23pR0wNfINl7wD+btoBr"
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:D7Dzo+CEo7AlD7CSlFybRQKksQQ=
In-Reply-To: <uhmmsr$3ft2$1@dont-email.me>
Content-Language: en-US
 by: olcott - Sun, 29 Oct 2023 23:43 UTC

On 10/29/2023 5:38 PM, olcott wrote:
> On 10/29/2023 3:58 PM, olcott wrote:
>> On 10/29/2023 12:30 PM, olcott wrote:
>>> *Everyone agrees that this is impossible*
>>> No computer program H can correctly predict what another computer
>>> program D will do when D has been programmed to do the opposite of
>>> whatever H says.
>>>
>>> H(D) is functional notation that specifies the return value from H(D)
>>> Correct(H(D)==false) means that H(D) is correct that D does not halt
>>> Correct(H(D)==true) means that H(D) is correct that D does halt
>>>
>>> For all H ∈ TM there exists input D such that
>>> (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false
>>>
>>> *No one pays attention to what this impossibility means*
>>> The halting problem is defined as an unsatisfiable specification thus
>>> isomorphic to a question that has been defined to have no correct
>>> answer.
>>>
>>> What time is it (yes or no)?
>>> has no correct answer because there is something wrong with the
>>> question. In this case we know to blame the question and not the one
>>> answering it.
>>>
>>> When we understand that there are some inputs to every TM H that
>>> contradict both Boolean return values that H could return then the
>>> question: Does your input halt? is essentially a self-contradictory
>>> (thus incorrect) question in these cases.
>>>
>>> The inability to correctly answer an incorrect question places no actual
>>> limit on anyone or anything.
>>>
>>> This insight opens up an alternative treatment of these pathological
>>> inputs the same way that ZFC handled Russell's Paradox.
>>>
>>
>> The halting problem proofs merely show that the problem
>> definition is unsatisfiable because every H of the infinite
>> set of all Turing Machines has an input that makes the
>> question: Does your input halt? into a self-contradictory
>> thus incorrect question for this H.
>
> I now have two University professors that agree with this.
> My words may need some technical improvement...
>
> [problem specification] is unsatisfiable
>
> The idea is to convey the essence of many technical
> papers in a single sound bite:
>
> *The halting problem proofs merely show that*
> *self-contradictory questions have no correct answer*

Anonymous experts are not "evidence"
and no "expert" can contradict the
actual definitions.

The whole thing is a matter of these definitions
semantically entailing additional nuances of meaning
that no one ever noticed before.

Computer scientists almost never pay any attention
at all to the philosophical underpinnings of the
foundations of concepts such as undecidability.

All of my related work in the last twenty years
has focused on these foundational underpinnings.

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: Does the halting problem actually limit what computers can do?

<uhmu9q$38glv$1@i2pn2.org>

  copy mid

https://www.rocksolidbbs.com/computers/article-flat.php?id=11978&group=comp.ai.philosophy#11978

  copy link   Newsgroups: sci.math sci.logic comp.theory comp.ai.philosophy
Path: i2pn2.org!.POSTED!not-for-mail
From: richard@damon-family.org (Richard Damon)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Subject: Re: Does the halting problem actually limit what computers can do?
Date: Sun, 29 Oct 2023 17:44:42 -0700
Organization: i2pn2 (i2pn.org)
Message-ID: <uhmu9q$38glv$1@i2pn2.org>
References: <uhm4r5$7n5$1@dont-email.me> <uhmh16$2ebu$1@dont-email.me>
<uhmmsr$3ft2$1@dont-email.me> <uhmqnq$40j0$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Mon, 30 Oct 2023 00:44:42 -0000 (UTC)
Injection-Info: i2pn2.org;
logging-data="3424959"; mail-complaints-to="usenet@i2pn2.org";
posting-account="diqKR1lalukngNWEqoq9/uFtbkm5U+w3w6FQ0yesrXg";
User-Agent: Mozilla Thunderbird
Content-Language: en-US
In-Reply-To: <uhmqnq$40j0$1@dont-email.me>
 by: Richard Damon - Mon, 30 Oct 2023 00:44 UTC

On 10/29/23 4:43 PM, olcott wrote:
> On 10/29/2023 5:38 PM, olcott wrote:
>> On 10/29/2023 3:58 PM, olcott wrote:
>>> On 10/29/2023 12:30 PM, olcott wrote:
>>>> *Everyone agrees that this is impossible*
>>>> No computer program H can correctly predict what another computer
>>>> program D will do when D has been programmed to do the opposite of
>>>> whatever H says.
>>>>
>>>> H(D) is functional notation that specifies the return value from H(D)
>>>> Correct(H(D)==false) means that H(D) is correct that D does not halt
>>>> Correct(H(D)==true) means that H(D) is correct that D does halt
>>>>
>>>> For all H ∈ TM there exists input D such that
>>>> (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false
>>>>
>>>> *No one pays attention to what this impossibility means*
>>>> The halting problem is defined as an unsatisfiable specification thus
>>>> isomorphic to a question that has been defined to have no correct
>>>> answer.
>>>>
>>>> What time is it (yes or no)?
>>>> has no correct answer because there is something wrong with the
>>>> question. In this case we know to blame the question and not the one
>>>> answering it.
>>>>
>>>> When we understand that there are some inputs to every TM H that
>>>> contradict both Boolean return values that H could return then the
>>>> question: Does your input halt? is essentially a self-contradictory
>>>> (thus incorrect) question in these cases.
>>>>
>>>> The inability to correctly answer an incorrect question places no
>>>> actual
>>>> limit on anyone or anything.
>>>>
>>>> This insight opens up an alternative treatment of these pathological
>>>> inputs the same way that ZFC handled Russell's Paradox.
>>>>
>>>
>>> The halting problem proofs merely show that the problem
>>> definition is unsatisfiable because every H of the infinite
>>> set of all Turing Machines has an input that makes the
>>> question: Does your input halt? into a self-contradictory
>>> thus incorrect question for this H.
>>
>> I now have two University professors that agree with this.
>> My words may need some technical improvement...
>>
>> [problem specification] is unsatisfiable
>>
>> The idea is to convey the essence of many technical
>> papers in a single sound bite:
>>
>> *The halting problem proofs merely show that*
>> *self-contradictory questions have no correct answer*
>
>    Anonymous experts are not "evidence"
>    and no "expert" can contradict the
>    actual definitions.
>
> The whole thing is a matter of these definitions
> semantically entailing additional nuances of meaning
> that no one ever noticed before.

Since you are so bad at the actual definition of words, it seems more
like you are imagining things that aren't there.

If you HAVE found an actual "nuance" that hasn't been noticed before,
maybe try writing an actual step-by-step proof showing that "nuance".

I don't think you can, and this is just another case of an idiot
shooting at a target that just doesn't exist.

>
> Computer scientists almost never pay any attention
> at all to the philosophical underpinnings of the
> foundations of concepts such as undecidability.

Maybe it is the philosophers that don't understand that undecidability
is a PRECISELY defined property.

The thing that you don't seem to understand is that in Formal Systems,
the rules are very important, and the things you are talking about are
well established by those rules.

If you want to change the "Rules" of the system, then you are in a very
real sense needing to START OVER and rebuild from the ground up.

It seems that you are so ignorant that you don't understand that many
of your "new" ideas already exist, but because of the discovered
limitations are just parts of fringe systems.

Yes, you can have systems where all true statements are provable, but
the resulting system ends up very limited in scope, and can't be used to
form anything like the mathematics that support things like Computation
Theory.

>
> All of my related work in the last twenty years
> has focused on these foundational underpinnings.
>

And it is a pile of rubbish, because you don't actually seem to know
what the things actually mean.

Maybe if you were willing to actually LEARN about the systems you want
to talk about; but your stated fear of "Learning error by rote" has put
you in the state of Being in Error by Ignorance.

Your idea of building a system from "First Principles" requires you to
first actually LEARN those "First Principles". And for a "Formal Logic
System" that means at least knowing all the basic rules and
definitions of the system. Things you have at times just admitted you
never knew, which sort of negates any "First Principle" development you
might have done.

I will say that many of your errors were known about 100 years ago, so
this shows a glaring hole in your education.

Re: Does the halting problem actually limit what computers can do?

<uhmv1o$4iu2$1@dont-email.me>

  copy mid

https://www.rocksolidbbs.com/computers/article-flat.php?id=11979&group=comp.ai.philosophy#11979

  copy link   Newsgroups: sci.math sci.logic comp.theory comp.ai.philosophy
Path: i2pn2.org!i2pn.org!eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: polcott2@gmail.com (olcott)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Subject: Re: Does the halting problem actually limit what computers can do?
Date: Sun, 29 Oct 2023 19:57:28 -0500
Organization: A noiseless patient Spider
Lines: 89
Message-ID: <uhmv1o$4iu2$1@dont-email.me>
References: <uhm4r5$7n5$1@dont-email.me> <uhmh16$2ebu$1@dont-email.me>
<uhmmsr$3ft2$1@dont-email.me> <uhmqnq$40j0$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Mon, 30 Oct 2023 00:57:28 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="01655030c6df07099b4d908f11be3d86";
logging-data="150466"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX19+tOsdACOPGALaCMj4O1bi"
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:L770xop6066RafAnoD6Fj/Cho9I=
Content-Language: en-US
In-Reply-To: <uhmqnq$40j0$1@dont-email.me>
 by: olcott - Mon, 30 Oct 2023 00:57 UTC

On 10/29/2023 6:43 PM, olcott wrote:
> On 10/29/2023 5:38 PM, olcott wrote:
>> On 10/29/2023 3:58 PM, olcott wrote:
>>> On 10/29/2023 12:30 PM, olcott wrote:
>>>> *Everyone agrees that this is impossible*
>>>> No computer program H can correctly predict what another computer
>>>> program D will do when D has been programmed to do the opposite of
>>>> whatever H says.
>>>>
>>>> H(D) is functional notation that specifies the return value from H(D)
>>>> Correct(H(D)==false) means that H(D) is correct that D does not halt
>>>> Correct(H(D)==true) means that H(D) is correct that D does halt
>>>>
>>>> For all H ∈ TM there exists input D such that
>>>> (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false
>>>>
>>>> *No one pays attention to what this impossibility means*
>>>> The halting problem is defined as an unsatisfiable specification thus
>>>> isomorphic to a question that has been defined to have no correct
>>>> answer.
>>>>
>>>> What time is it (yes or no)?
>>>> has no correct answer because there is something wrong with the
>>>> question. In this case we know to blame the question and not the one
>>>> answering it.
>>>>
>>>> When we understand that there are some inputs to every TM H that
>>>> contradict both Boolean return values that H could return then the
>>>> question: Does your input halt? is essentially a self-contradictory
>>>> (thus incorrect) question in these cases.
>>>>
>>>> The inability to correctly answer an incorrect question places no
>>>> actual
>>>> limit on anyone or anything.
>>>>
>>>> This insight opens up an alternative treatment of these pathological
>>>> inputs the same way that ZFC handled Russell's Paradox.
>>>>
>>>
>>> The halting problem proofs merely show that the problem
>>> definition is unsatisfiable because every H of the infinite
>>> set of all Turing Machines has an input that makes the
>>> question: Does your input halt? into a self-contradictory
>>> thus incorrect question for this H.
>>
>> I now have two University professors that agree with this.
>> My words may need some technical improvement...
>>
>> [problem specification] is unsatisfiable
>>
>> The idea is to convey the essence of many technical
>> papers in a single sound bite:
>>
>> *The halting problem proofs merely show that*
>> *self-contradictory questions have no correct answer*
>
>    Anonymous experts are not "evidence"
>    and no "expert" can contradict the
>    actual definitions.
>
> The whole thing is a matter of these definitions
> semantically entailing additional nuances of meaning
> that no one ever noticed before.
>
> Computer scientists almost never pay any attention
> at all to the philosophical underpinnings of the
> foundations of concepts such as undecidability.
>
> All of my related work in the last twenty years
> has focused on these foundational underpinnings.
>

In the same way that incompleteness is proven whenever
any WFF of a formal system cannot be proven or refuted
in this formal system, EVEN WHEN THE WFF IS SEMANTICALLY
SELF-CONTRADICTORY,

The notion of undecidability is determined even when the
decider is required to correctly answer a self-contradictory
(thus incorrect) question.

This is the epiphany of my work for the last 20 years and
two professors agree that this does apply to the halting
problem specification.

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: Does the halting problem actually limit what computers can do?

<uhmvli$38glv$2@i2pn2.org>

  copy mid

https://www.rocksolidbbs.com/computers/article-flat.php?id=11980&group=comp.ai.philosophy#11980

  copy link   Newsgroups: sci.math sci.logic comp.theory comp.ai.philosophy
Path: i2pn2.org!.POSTED!not-for-mail
From: richard@damon-family.org (Richard Damon)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Subject: Re: Does the halting problem actually limit what computers can do?
Date: Sun, 29 Oct 2023 18:08:02 -0700
Organization: i2pn2 (i2pn.org)
Message-ID: <uhmvli$38glv$2@i2pn2.org>
References: <uhm4r5$7n5$1@dont-email.me> <uhmh16$2ebu$1@dont-email.me>
<uhmmsr$3ft2$1@dont-email.me> <uhmqnq$40j0$1@dont-email.me>
<uhmv1o$4iu2$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Mon, 30 Oct 2023 01:08:03 -0000 (UTC)
Injection-Info: i2pn2.org;
logging-data="3424959"; mail-complaints-to="usenet@i2pn2.org";
posting-account="diqKR1lalukngNWEqoq9/uFtbkm5U+w3w6FQ0yesrXg";
User-Agent: Mozilla Thunderbird
In-Reply-To: <uhmv1o$4iu2$1@dont-email.me>
Content-Language: en-US
 by: Richard Damon - Mon, 30 Oct 2023 01:08 UTC

On 10/29/23 5:57 PM, olcott wrote:
> On 10/29/2023 6:43 PM, olcott wrote:
>> On 10/29/2023 5:38 PM, olcott wrote:
>>> On 10/29/2023 3:58 PM, olcott wrote:
>>>> On 10/29/2023 12:30 PM, olcott wrote:
>>>>> *Everyone agrees that this is impossible*
>>>>> No computer program H can correctly predict what another computer
>>>>> program D will do when D has been programmed to do the opposite of
>>>>> whatever H says.
>>>>>
>>>>> H(D) is functional notation that specifies the return value from H(D)
>>>>> Correct(H(D)==false) means that H(D) is correct that D does not halt
>>>>> Correct(H(D)==true) means that H(D) is correct that D does halt
>>>>>
>>>>> For all H ∈ TM there exists input D such that
>>>>> (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false
>>>>>
>>>>> *No one pays attention to what this impossibility means*
>>>>> The halting problem is defined as an unsatisfiable specification thus
>>>>> isomorphic to a question that has been defined to have no correct
>>>>> answer.
>>>>>
>>>>> What time is it (yes or no)?
>>>>> has no correct answer because there is something wrong with the
>>>>> question. In this case we know to blame the question and not the one
>>>>> answering it.
>>>>>
>>>>> When we understand that there are some inputs to every TM H that
>>>>> contradict both Boolean return values that H could return then the
>>>>> question: Does your input halt? is essentially a self-contradictory
>>>>> (thus incorrect) question in these cases.
>>>>>
>>>>> The inability to correctly answer an incorrect question places no
>>>>> actual
>>>>> limit on anyone or anything.
>>>>>
>>>>> This insight opens up an alternative treatment of these pathological
>>>>> inputs the same way that ZFC handled Russell's Paradox.
>>>>>
>>>>
>>>> The halting problem proofs merely show that the problem
>>>> definition is unsatisfiable because every H of the infinite
>>>> set of all Turing Machines has an input that makes the
>>>> question: Does your input halt? into a self-contradictory
>>>> thus incorrect question for this H.
>>>
>>> I now have two University professors that agree with this.
>>> My words may need some technical improvement...
>>>
>>> [problem specification] is unsatisfiable
>>>
>>> The idea is to convey the essence of many technical
>>> papers in a single sound bite:
>>>
>>> *The halting problem proofs merely show that*
>>> *self-contradictory questions have no correct answer*
>>
>>     Anonymous experts are not "evidence"
>>     and no "expert" can contradict the
>>     actual definitions.
>>
>> The whole thing is a matter of these definitions
>> semantically entailing additional nuances of meaning
>> that no one ever noticed before.
>>
>> Computer scientists almost never pay any attention
>> at all to the philosophical underpinnings of the
>> foundations of concepts such as undecidability.
>>
>> All of my related work in the last twenty years
>> has focused on these foundational underpinnings.
>>
>
> In the same way that incompleteness is proven whenever
> any WFF of a formal system cannot be proven or refuted
> in this formal system EVEN WHEN THE WFF IS SEMANTICALLY
> SELF-CONTRADICTORY

Except it isn't, because you don't understand the logic.

>
> The notion of undecidability is determined even when the
> decider is required to correctly answer a self-contradictory
> (thus incorrect) question.

Which it isn't, and you don't understand the term.

>
> This is the epiphany of my work for the last 20 years and
> two professors agree that this does apply to the halting
> problem specification.
>

yes, your "epiphany" is just your delusion from stupidity.

You have PROVEN you don't understand a thing about what you are talking
about and thus prove yourself a liar.

As I mentioned, if you really think you have something, try to actually
show it with a real formal proof starting from the actual accepted
definitions.

Your problem seems to be that you just don't understand the fields well
enough to know what you can actually start with, or understand logic
well enough to actually form a real logical proof.

That you just keep repeating your INCORRECT claims proves that you have
gaslighted yourself into believing your lies, and that you actually have
nothing to base your work on except your own stupid lies.
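The construction the whole thread is arguing about can be sketched concretely. The following is an illustrative Python sketch, not anyone's actual code from the thread; `make_diagonal`, `h_always_halts`, and `h_never_halts` are hypothetical names. It shows that for each fixed candidate decider `h`, the diagonal program defeats that particular `h`:

```python
# Sketch of the diagonal construction: given any candidate halt-decider
# h (h(f) -> True means "f() halts"), build a program d that does the
# opposite of whatever h predicts about d itself.

def make_diagonal(h):
    def d():
        if h(d):            # h predicts d halts ...
            while True:     # ... so d loops forever, refuting h
                pass
        return None         # h predicts d loops, so d halts, refuting h
    return d

# Two concrete (and necessarily wrong) candidate deciders:
def h_always_halts(f):
    return True

def h_never_halts(f):
    return False

d1 = make_diagonal(h_always_halts)
# h_always_halts(d1) is True, but d1() would loop forever: wrong answer.

d2 = make_diagonal(h_never_halts)
result = d2()  # returns immediately, so h_never_halts(d2) was wrong
```

Note the sketch never calls `d1()`, since it really would loop forever; the point is that each fixed `h` gives a wrong answer on its own diagonal input, not that the input's behavior is indeterminate.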

Re: Does the halting problem actually limit what computers can do?

<uhn0au$4qnc$1@dont-email.me>

https://www.rocksolidbbs.com/computers/article-flat.php?id=11981&group=comp.ai.philosophy#11981

Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
From: polcott2@gmail.com (olcott)
Date: Sun, 29 Oct 2023 20:19:26 -0500
 by: olcott - Mon, 30 Oct 2023 01:19 UTC

On 10/29/2023 7:57 PM, olcott wrote:
> [...]

I cannot form a proof on the basis of the conventional
definitions because the issue is that one of these
definitions semantically entails more meaning than
anyone ever noticed before.

> That this applies generically to the notion of undecidability
> seems to be an extension of the same ideas that these
> professors applied only to the halting problem specification.

> The lead professor of the two and I exchanged fifty emails
> in which he confirmed my verbatim paraphrase of his ideas using
> my own terms such as "incorrect questions".

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: Does the halting problem actually limit what computers can do?

<uhn1cn$38glu$1@i2pn2.org>

https://www.rocksolidbbs.com/computers/article-flat.php?id=11982&group=comp.ai.philosophy#11982

Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
From: richard@damon-family.org (Richard Damon)
Date: Sun, 29 Oct 2023 18:37:27 -0700
 by: Richard Damon - Mon, 30 Oct 2023 01:37 UTC

On 10/29/23 6:19 PM, olcott wrote:
>> [...]
>
> I cannot form a proof on the basis of the conventional
> definitions because the issue is that one of these
> definitions semantically entails more meaning than
> anyone ever noticed before.

Then you are admitting that you can't do the work in the formal system,
so any claim you make about anything IN the system is just invalid.

IF you want to try to change the definitions, you need to just re-derive
the system from the ground up with your new rules. (I doubt you can do
that.)

Or, you could try to get some help by trying to clearly explain the
error in the fundamental rules you think are wrong.

Note, to do that you need to actually show the real problem that the
rule is causing.

Your idea that undecidable problems are actually invalid isn't going to
fly, as many of the undecidable problems are actually quite important.

The fact that you can't understand that means you are going to have a
hard time convincing others of your ideas.

>
> That this applies generically to the notion of undecidability
> seems to be an extension of these sames ideas that these
> professors only applied to the halting problem specification.

You have very bad professors if they only apply "undecidability" to just
the Halting Problem, as MANY problems are "undecidable".

>
> The lead of these two professors and I exchanged fifty emails
> where he confirmed my verbatim paraphrase of his ideas using
> my own terms such as "incorrect questions".
>

And, until you provide the names and actual statements, this claim is
worth exactly NOTHING.
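One point of contention above can be checked mechanically. On the standard reading, once a particular H is fixed, the diagonal program's behavior is fully determined, so the question "does D halt?" has a definite correct answer for that pair; that fixed H just fails to return it. A minimal illustrative sketch (hypothetical names `h` and `d`, not code from the thread):

```python
# Once h is pinned down, d's behavior is determined, and the question
# "does d() halt?" has a definite answer -- h simply answers it wrongly.

def h(f):        # one concrete candidate: always answers "does not halt"
    return False

def d():         # the diagonal program built against this particular h
    if h(d):
        while True:
            pass
    return "halted"

# d() actually halts, so the correct answer about d is True;
# h(d) == False is a wrong answer, not an unanswerable question.
```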

Re: Does the halting problem actually limit what computers can do?

<uhn1pm$525d$1@dont-email.me>

https://www.rocksolidbbs.com/computers/article-flat.php?id=11983&group=comp.ai.philosophy#11983

Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
From: polcott2@gmail.com (olcott)
Date: Sun, 29 Oct 2023 20:44:22 -0500
 by: olcott - Mon, 30 Oct 2023 01:44 UTC

On 10/29/2023 8:19 PM, olcott wrote:
> [...]

> Then you are admitting that you can't do the
> work in the formal system, so any claim you
> make about anything IN the system is just invalid.

That the "term undecidability" semantically entails
previously unnoticed nuances of meaning can be understood
on the basis of the reasoning of myself and these two professors.

Just like incompleteness includes self-contradictory
expressions in its measure of incompleteness, undecidability
includes problem specifications that entail self-contradictory
questions. IF YOU WEREN'T STUCK IN REBUTTAL MODE YOU MIGHT SEE THIS

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer
