
Re: Does the halting problem actually limit what computers can do?


https://www.rocksolidbbs.com/computers/article-flat.php?id=12010&group=comp.ai.philosophy#12010

Path: i2pn2.org!.POSTED!not-for-mail
From: richard@damon-family.org (Richard Damon)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Subject: Re: Does the halting problem actually limit what computers can do?
Date: Mon, 30 Oct 2023 14:53:56 -0700
Organization: i2pn2 (i2pn.org)
Message-ID: <uhp8lj$3b08n$3@i2pn2.org>
References: <uhm4r5$7n5$1@dont-email.me> <uholm0$hgki$1@dont-email.me>
<uhon90$hqbs$1@dont-email.me> <uhoopq$i2kc$1@dont-email.me>
<uhore5$im5o$1@dont-email.me> <uhp2m7$k1ls$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Mon, 30 Oct 2023 21:53:56 -0000 (UTC)
Injection-Info: i2pn2.org;
logging-data="3506455"; mail-complaints-to="usenet@i2pn2.org";
posting-account="diqKR1lalukngNWEqoq9/uFtbkm5U+w3w6FQ0yesrXg";
User-Agent: Mozilla Thunderbird
In-Reply-To: <uhp2m7$k1ls$1@dont-email.me>
Content-Language: en-US
 by: Richard Damon - Mon, 30 Oct 2023 21:53 UTC

On 10/30/23 1:11 PM, olcott wrote:
> On 10/30/2023 1:08 PM, olcott wrote:
>> On 10/30/2023 12:23 PM, olcott wrote:
>>> On 10/30/2023 11:57 AM, olcott wrote:
>>>> On 10/30/2023 11:29 AM, olcott wrote:
>>>>> On 10/29/2023 12:30 PM, olcott wrote:
>>>>>> *Everyone agrees that this is impossible*
>>>>>> No computer program H can correctly predict what another computer
>>>>>> program D will do when D has been programmed to do the opposite of
>>>>>> whatever H says.
>>>>>>
>>>>>> H(D) is functional notation that specifies the return value from H(D)
>>>>>> Correct(H(D)==false) means that H(D) is correct that D does not halt
>>>>>> Correct(H(D)==true) means that H(D) is correct that D does halt
>>>>>>
>>>>>> For all H ∈ TM there exists input D such that
>>>>>> (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false
>>>>>>
>>>>>> *No one pays attention to what this impossibility means*
>>>>>> The halting problem is defined as an unsatisfiable specification thus
>>>>>> isomorphic to a question that has been defined to have no correct
>>>>>> answer.
>>>>>>
>>>>>> What time is it (yes or no)?
>>>>>> has no correct answer because there is something wrong with the
>>>>>> question. In this case we know to blame the question and not the one
>>>>>> answering it.
>>>>>>
>>>>>> When we understand that there are some inputs to every TM H that
>>>>>> contradict both Boolean return values that H could return then the
>>>>>> question: Does your input halt? is essentially a self-contradictory
>>>>>> (thus incorrect) question in these cases.
>>>>>>
>>>>>> The inability to correctly answer an incorrect question places no
>>>>>> actual
>>>>>> limit on anyone or anything.
>>>>>>
>>>>>> This insight opens up an alternative treatment of these pathological
>>>>>> inputs the same way that ZFC handled Russell's Paradox.
>>>>>>
>>>>>
>>>>> *The halting problem proofs merely show that*
>>>>> *self-contradictory questions have no correct answer*
>>>>>
>>>>> *A self-contradictory question is defined as*
>>>>>    Any yes/no question that contradicts both yes/no answers.
>>>>>
>>>>> For every H in the set of all Turing Machines there exists a D
>>>>> that derives a self-contradictory question for this H in that
>>>>> (a) If this H says that its D will halt, D loops
>>>>> (b) If this H says that its D will loop, D halts.
>>>>> *Thus the question: Does D halt? is contradicted by some D for each H*
>>>>
>>>> *proving that this is literally true*
>>>> *The halting problem proofs merely show that*
>>>> *self-contradictory questions have no correct answer*
>>>>
>>>
>>>     Nope, since each specific question HAS
>>>     a correct answer, it shows that, by your
>>>     own definition, it isn't "Self-Contradictory"
>>>
>>> *That is a deliberate strawman deception paraphrase*
>>> *That is a deliberate strawman deception paraphrase*
>>> *That is a deliberate strawman deception paraphrase*
>>>
>>> There does not exist a solution to the halting problem because
>>> *for every Turing Machine of the infinite set of all Turing machines*
>>> *for every Turing Machine of the infinite set of all Turing machines*
>>> *for every Turing Machine of the infinite set of all Turing machines*
>>>
>>> there exists a D that makes the question:
>>> Does your input halt?
>>> a self-contradictory thus incorrect question.
>>
>>     Where does it say that a Turing
>>     Machine must exist to do it?
>>
>> *The only reason that no such Turing Machine exists is*
>>
>> For every H in the set of all Turing Machines there exists a D
>> that derives a self-contradictory question for this H in that
>> (a) If this H says that its D will halt, D loops
>> (b) If this H says that its D will loop, D halts.
>> *Thus the question: Does D halt? is contradicted by some D for each H*
>>
>> *therefore*
>>
>> *The halting problem proofs merely show that*
>> *self-contradictory questions have no correct answer*
>
>    The issue that you ignore is that you are
>    conflating a set of questions with a question,
>    and are basing your logic on a strawman.
>
> It is not my mistake. Linguists understand that the
> context of who is asked a question changes the meaning
> of the question.

It *CAN* if the question asks something about the person being questioned.

But it *CAN'T* if the question doesn't in any way refer to who you ask.

If you ask "what is 1 + 2?", it doesn't matter who you ask: the answer is
always 3.

If you ask "what is the third planet around the star Sol?", the answer is
always Earth.

If you ask "will the specific program D, built on the specific program H
for which H(D,D) returns false, halt when invoked as D(D)?", the answer is
always True.

If you ask "will the specific program D, built on the specific program H
for which H(D,D) returns true, halt when invoked as D(D)?", the answer is
always False.

Thus, since any instance of the halting problem in your set is one of
those last two questions, there is always a correct answer to the
question, so THAT question is not "Contradictory", as "Contradictory
Questions" never have a correct answer (by your own definition).
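
[The point above can be sketched in a few lines of Python. None of this code is from the thread; H, D, and H1 are illustrative toy names, with H standing in for one specific, fixed decider. It shows that once H is fixed, D's behavior, and hence the correct answer, is also fixed.]

```python
# A specific, fixed decider: this one always answers False ("does not halt").
# Any concrete H must likewise commit to exactly one answer per input.
def H(prog, inp):
    return False

# D does the opposite of whatever H predicts about D(D).
def D(prog):
    if H(D, D):          # H says "halts" -> loop forever
        while True:
            pass
    return               # H says "loops" -> halt immediately

# This H(D, D) returns False, so D(D) halts: the question
# "does D(D) halt?" has the definite answer True; H is simply wrong.
assert H(D, D) is False
D(D)                     # returns, i.e. halts

# A *different* decider can answer correctly about this very same D,
# because D was built on H, not on H1.
def H1(prog, inp):
    return True

assert H1(D, D) is True
```

So each concrete (H, D) pair yields a question with a determined answer; only the H that D was built on is forced to get it wrong.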

>
> This can easily be shown to apply to decision problem
> instances as follows:
>
> In that H.true and H.false are the wrong answer when
> D calls H to do the opposite of whatever value that
> either H returns.

Nope, because H CAN only go to one of H.false or H.true based on its
programming.

THAT being the wrong answer doesn't make the problem invalid.

You are just DECEPTIVELY assuming a property of H that just doesn't exist.

Note: an H1 that goes to H1.true when given D1 is a different program
AND a different input than the H2 that goes to H2.false when given D2.

So, you can't treat this as the same question to try to show that you
have a contradiction.

>
> Whereas exactly one of H1.true or H1.false is correct
> for this exact same D.

Yes, ONE of the answers would have been correct for the D given.

It will be the one that the H that D was built on didn't go to.

THAT is valid, and results in a valid question.

>
> This proves that the question: "Does your input halt?"
> has a different meaning across the H and H1 pairs.
>

Nope. Remember, to compare the questions "Does your input Halt?" you
need to give them the exact same input.

A given input D is built on one SPECIFIC H, not whatever H we are giving
the input to.

Remember also, the ACTUAL question is: "Does the machine represented by
your input Halt?" The D in the input has specific behavior, and thus the
actual answer for does D(D) Halt is defined, and the same for all
deciders given this exact same D.

Note, this is why Linz uses the ^ notation. Given a decider by whatever
name, we can make the ^ program from it: thus H1 is given H1^, H2 is
given H2^, and it is clear that each different decider has a
different input.
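
[The ^ construction mentioned above can be sketched as a higher-order function. This is an illustrative Python analogue, not Linz's formal definition: make_hat plays the role of the ^ operator, and H1/H2 are toy deciders.]

```python
# From any decider H we can mechanically build its "nemesis" H^.
def make_hat(H):
    def H_hat(prog):
        if H(prog, prog):    # H predicts "halts" -> loop
            while True:
                pass
        return               # H predicts "loops" -> halt
    return H_hat

H1 = lambda p, i: True       # one specific decider
H2 = lambda p, i: False      # a different specific decider

H1_hat = make_hat(H1)        # H1's problem input ...
H2_hat = make_hat(H2)        # ... is not H2's problem input

assert H1_hat is not H2_hat  # distinct programs, hence distinct inputs

# H2_hat(H2_hat) halts (H2 answered False), and H1 answers it correctly:
assert H2_hat(H2_hat) is None
assert H1(H2_hat, H2_hat) is True
```

Each decider gets its own distinct ^-input, which is why the instances cannot honestly be treated as one question.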

You are just being deceptive trying to call all the different inputs by
the same name. That, or you are just too dumb to understand the error in
doing so.

I will challenge you to write an actual program that meets the
requirements of a computation that can change its behavior based on who
is deciding on it.

Note, with your example "H/D" program, the H that D calls is part of the
definition of D, so when you give D to a decider, you need to give that H
to said decider as well.

You are just proving your utter stupidity by repeating factually
incorrect claims with no backing other than your clearly flawed reasoning.

If you want to show my reasoning is incorrect, quote the message and
show the actual logical error in the statement (not just that you think
it is wrong).

Re: Does the halting problem actually limit what computers can do?


https://www.rocksolidbbs.com/computers/article-flat.php?id=12011&group=comp.ai.philosophy#12011

Path: i2pn2.org!i2pn.org!eternal-september.org!feeder2.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: polcott2@gmail.com (olcott)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Subject: Re: Does the halting problem actually limit what computers can do?
Date: Mon, 30 Oct 2023 17:10:15 -0500
Organization: A noiseless patient Spider
Lines: 125
Message-ID: <uhp9k7$l6kb$1@dont-email.me>
References: <uhm4r5$7n5$1@dont-email.me> <uholm0$hgki$1@dont-email.me>
<uhon90$hqbs$1@dont-email.me> <uhoopq$i2kc$1@dont-email.me>
<uhore5$im5o$1@dont-email.me> <uhp2m7$k1ls$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Mon, 30 Oct 2023 22:10:15 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="01655030c6df07099b4d908f11be3d86";
logging-data="694923"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1+JcbeKWUdcvKQWnUc73eIC"
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:1LFXdCw4gI72FdcSPJkFuyEWozk=
In-Reply-To: <uhp2m7$k1ls$1@dont-email.me>
Content-Language: en-US
 by: olcott - Mon, 30 Oct 2023 22:10 UTC

On 10/30/2023 3:11 PM, olcott wrote:
> On 10/30/2023 1:08 PM, olcott wrote:
>> On 10/30/2023 12:23 PM, olcott wrote:
>>> On 10/30/2023 11:57 AM, olcott wrote:
>>>> On 10/30/2023 11:29 AM, olcott wrote:
>>>>> On 10/29/2023 12:30 PM, olcott wrote:
>>>>>> *Everyone agrees that this is impossible*
>>>>>> No computer program H can correctly predict what another computer
>>>>>> program D will do when D has been programmed to do the opposite of
>>>>>> whatever H says.
>>>>>>
>>>>>> H(D) is functional notation that specifies the return value from H(D)
>>>>>> Correct(H(D)==false) means that H(D) is correct that D does not halt
>>>>>> Correct(H(D)==true) means that H(D) is correct that D does halt
>>>>>>
>>>>>> For all H ∈ TM there exists input D such that
>>>>>> (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false
>>>>>>
>>>>>> *No one pays attention to what this impossibility means*
>>>>>> The halting problem is defined as an unsatisfiable specification thus
>>>>>> isomorphic to a question that has been defined to have no correct
>>>>>> answer.
>>>>>>
>>>>>> What time is it (yes or no)?
>>>>>> has no correct answer because there is something wrong with the
>>>>>> question. In this case we know to blame the question and not the one
>>>>>> answering it.
>>>>>>
>>>>>> When we understand that there are some inputs to every TM H that
>>>>>> contradict both Boolean return values that H could return then the
>>>>>> question: Does your input halt? is essentially a self-contradictory
>>>>>> (thus incorrect) question in these cases.
>>>>>>
>>>>>> The inability to correctly answer an incorrect question places no
>>>>>> actual
>>>>>> limit on anyone or anything.
>>>>>>
>>>>>> This insight opens up an alternative treatment of these pathological
>>>>>> inputs the same way that ZFC handled Russell's Paradox.
>>>>>>
>>>>>
>>>>> *The halting problem proofs merely show that*
>>>>> *self-contradictory questions have no correct answer*
>>>>>
>>>>> *A self-contradictory question is defined as*
>>>>>    Any yes/no question that contradicts both yes/no answers.
>>>>>
>>>>> For every H in the set of all Turing Machines there exists a D
>>>>> that derives a self-contradictory question for this H in that
>>>>> (a) If this H says that its D will halt, D loops
>>>>> (b) If this H says that its D will loop, D halts.
>>>>> *Thus the question: Does D halt? is contradicted by some D for each H*
>>>>
>>>> *proving that this is literally true*
>>>> *The halting problem proofs merely show that*
>>>> *self-contradictory questions have no correct answer*
>>>>
>>>
>>>     Nope, since each specific question HAS
>>>     a correct answer, it shows that, by your
>>>     own definition, it isn't "Self-Contradictory"
>>>
>>> *That is a deliberate strawman deception paraphrase*
>>> *That is a deliberate strawman deception paraphrase*
>>> *That is a deliberate strawman deception paraphrase*
>>>
>>> There does not exist a solution to the halting problem because
>>> *for every Turing Machine of the infinite set of all Turing machines*
>>> *for every Turing Machine of the infinite set of all Turing machines*
>>> *for every Turing Machine of the infinite set of all Turing machines*
>>>
>>> there exists a D that makes the question:
>>> Does your input halt?
>>> a self-contradictory thus incorrect question.
>>
>>     Where does it say that a Turing
>>     Machine must exist to do it?
>>
>> *The only reason that no such Turing Machine exists is*
>>
>> For every H in the set of all Turing Machines there exists a D
>> that derives a self-contradictory question for this H in that
>> (a) If this H says that its D will halt, D loops
>> (b) If this H says that its D will loop, D halts.
>> *Thus the question: Does D halt? is contradicted by some D for each H*
>>
>> *therefore*
>>
>> *The halting problem proofs merely show that*
>> *self-contradictory questions have no correct answer*
>
>    The issue that you ignore is that you are
>    conflating a set of questions with a question,
>    and are basing your logic on a strawman.
>
> It is not my mistake. Linguists understand that the
> context of who is asked a question changes the meaning
> of the question.
>
> This can easily be shown to apply to decision problem
> instances as follows:
>
> In that H.true and H.false are the wrong answer when
> D calls H to do the opposite of whatever value that
> either H returns.
>
> Whereas exactly one of H1.true or H1.false is correct
> for this exact same D.
>
> This proves that the question: "Does your input halt?"
> has a different meaning across the H and H1 pairs.

It *CAN* if the question asks something about
the person being questioned.

But it *CAN'T* if the question doesn't in any
way refer to who you ask.

D calls H, thus D DOES refer to H.
D does not call H1, therefore D does not refer to H1.

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: Does the halting problem actually limit what computers can do?


https://www.rocksolidbbs.com/computers/article-flat.php?id=12012&group=comp.ai.philosophy#12012

Path: i2pn2.org!.POSTED!not-for-mail
From: richard@damon-family.org (Richard Damon)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Subject: Re: Does the halting problem actually limit what computers can do?
Date: Mon, 30 Oct 2023 15:27:01 -0700
Organization: i2pn2 (i2pn.org)
Message-ID: <uhpajk$3b08n$4@i2pn2.org>
References: <uhm4r5$7n5$1@dont-email.me> <uholm0$hgki$1@dont-email.me>
<uhon90$hqbs$1@dont-email.me> <uhoopq$i2kc$1@dont-email.me>
<uhore5$im5o$1@dont-email.me> <uhp2m7$k1ls$1@dont-email.me>
<uhp9k7$l6kb$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Mon, 30 Oct 2023 22:27:00 -0000 (UTC)
Injection-Info: i2pn2.org;
logging-data="3506455"; mail-complaints-to="usenet@i2pn2.org";
posting-account="diqKR1lalukngNWEqoq9/uFtbkm5U+w3w6FQ0yesrXg";
User-Agent: Mozilla Thunderbird
In-Reply-To: <uhp9k7$l6kb$1@dont-email.me>
Content-Language: en-US
 by: Richard Damon - Mon, 30 Oct 2023 22:27 UTC

On 10/30/23 3:10 PM, olcott wrote:
> On 10/30/2023 3:11 PM, olcott wrote:
>> On 10/30/2023 1:08 PM, olcott wrote:
>>> On 10/30/2023 12:23 PM, olcott wrote:
>>>> On 10/30/2023 11:57 AM, olcott wrote:
>>>>> On 10/30/2023 11:29 AM, olcott wrote:
>>>>>> On 10/29/2023 12:30 PM, olcott wrote:
>>>>>>> *Everyone agrees that this is impossible*
>>>>>>> No computer program H can correctly predict what another computer
>>>>>>> program D will do when D has been programmed to do the opposite of
>>>>>>> whatever H says.
>>>>>>>
>>>>>>> H(D) is functional notation that specifies the return value from
>>>>>>> H(D)
>>>>>>> Correct(H(D)==false) means that H(D) is correct that D does not halt
>>>>>>> Correct(H(D)==true) means that H(D) is correct that D does halt
>>>>>>>
>>>>>>> For all H ∈ TM there exists input D such that
>>>>>>> (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false
>>>>>>>
>>>>>>> *No one pays attention to what this impossibility means*
>>>>>>> The halting problem is defined as an unsatisfiable specification
>>>>>>> thus
>>>>>>> isomorphic to a question that has been defined to have no correct
>>>>>>> answer.
>>>>>>>
>>>>>>> What time is it (yes or no)?
>>>>>>> has no correct answer because there is something wrong with the
>>>>>>> question. In this case we know to blame the question and not the one
>>>>>>> answering it.
>>>>>>>
>>>>>>> When we understand that there are some inputs to every TM H that
>>>>>>> contradict both Boolean return values that H could return then the
>>>>>>> question: Does your input halt? is essentially a self-contradictory
>>>>>>> (thus incorrect) question in these cases.
>>>>>>>
>>>>>>> The inability to correctly answer an incorrect question places no
>>>>>>> actual
>>>>>>> limit on anyone or anything.
>>>>>>>
>>>>>>> This insight opens up an alternative treatment of these pathological
>>>>>>> inputs the same way that ZFC handled Russell's Paradox.
>>>>>>>
>>>>>>
>>>>>> *The halting problem proofs merely show that*
>>>>>> *self-contradictory questions have no correct answer*
>>>>>>
>>>>>> *A self-contradictory question is defined as*
>>>>>>    Any yes/no question that contradicts both yes/no answers.
>>>>>>
>>>>>> For every H in the set of all Turing Machines there exists a D
>>>>>> that derives a self-contradictory question for this H in that
>>>>>> (a) If this H says that its D will halt, D loops
>>>>>> (b) If this H says that its D will loop, D halts.
>>>>>> *Thus the question: Does D halt? is contradicted by some D for
>>>>>> each H*
>>>>>
>>>>> *proving that this is literally true*
>>>>> *The halting problem proofs merely show that*
>>>>> *self-contradictory questions have no correct answer*
>>>>>
>>>>
>>>>     Nope, since each specific question HAS
>>>>     a correct answer, it shows that, by your
>>>>     own definition, it isn't "Self-Contradictory"
>>>>
>>>> *That is a deliberate strawman deception paraphrase*
>>>> *That is a deliberate strawman deception paraphrase*
>>>> *That is a deliberate strawman deception paraphrase*
>>>>
>>>> There does not exist a solution to the halting problem because
>>>> *for every Turing Machine of the infinite set of all Turing machines*
>>>> *for every Turing Machine of the infinite set of all Turing machines*
>>>> *for every Turing Machine of the infinite set of all Turing machines*
>>>>
>>>> there exists a D that makes the question:
>>>> Does your input halt?
>>>> a self-contradictory thus incorrect question.
>>>
>>>     Where does it say that a Turing
>>>     Machine must exist to do it?
>>>
>>> *The only reason that no such Turing Machine exists is*
>>>
>>> For every H in the set of all Turing Machines there exists a D
>>> that derives a self-contradictory question for this H in that
>>> (a) If this H says that its D will halt, D loops
>>> (b) If this H says that its D will loop, D halts.
>>> *Thus the question: Does D halt? is contradicted by some D for each H*
>>>
>>> *therefore*
>>>
>>> *The halting problem proofs merely show that*
>>> *self-contradictory questions have no correct answer*
>>
>>     The issue that you ignore is that you are
>>     conflating a set of questions with a question,
>>     and are basing your logic on a strawman.
>>
>> It is not my mistake. Linguists understand that the
>> context of who is asked a question changes the meaning
>> of the question.
>>
>> This can easily be shown to apply to decision problem
>> instances as follows:
>>
>> In that H.true and H.false are the wrong answer when
>> D calls H to do the opposite of whatever value that
>> either H returns.
>>
>> Whereas exactly one of H1.true or H1.false is correct
>> for this exact same D.
>>
>> This proves that the question: "Does your input halt?"
>> has a different meaning across the H and H1 pairs.
>
>    It *CAN* if the question asks something about
>    the person being questioned.
>
>    But it *CAN'T* if the question doesn't in any
>    way refer to who you ask.
>
> D calls H thus D DOES refer to H
> D does not call H1 therefore D does not refer to H1
>

The QUESTION doesn't refer to the person being asked?

That D calls H doesn't REFER to the asker, but to a specific machine.

Thus, nothing in the question refers to the asker.
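
[A minimal sketch of this point, again with illustrative toy deciders rather than anything from the thread: the question names one specific machine, so different responders can give different answers, but the correct answer does not move.]

```python
def H(p, i):                 # the specific decider D was built on
    return False

def D(p):                    # D refers to H by name, not to "the asker"
    if H(D, D):
        while True:
            pass
    return

# The fact of the matter, established by running it: D(D) halts.
D(D)

# Pose the same question about the same D to three different responders:
deciders = {"H": H, "H1": lambda p, i: True, "H2": lambda p, i: False}
answers = {name: f(D, D) for name, f in deciders.items()}

# The responses differ, but the correct answer (True) is fixed;
# H1 merely happens to be the one that is right about this D.
assert answers == {"H": False, "H1": True, "H2": False}
```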

Does "What is Joe Blow's age?" depend on who you are asking? Even if you
are asking Joe Blow?

NO.

So, you are just continuing to prove your stupidity.

Re: Does the halting problem actually limit what computers can do?


https://www.rocksolidbbs.com/computers/article-flat.php?id=12013&group=comp.ai.philosophy#12013

Path: i2pn2.org!i2pn.org!eternal-september.org!feeder2.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: polcott2@gmail.com (olcott)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Subject: Re: Does the halting problem actually limit what computers can do?
Date: Mon, 30 Oct 2023 17:46:52 -0500
Organization: A noiseless patient Spider
Lines: 151
Message-ID: <uhpbot$lib1$1@dont-email.me>
References: <uhm4r5$7n5$1@dont-email.me> <uholm0$hgki$1@dont-email.me>
<uhon90$hqbs$1@dont-email.me> <uhoopq$i2kc$1@dont-email.me>
<uhore5$im5o$1@dont-email.me> <uhp2m7$k1ls$1@dont-email.me>
<uhp9k7$l6kb$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Mon, 30 Oct 2023 22:46:53 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="01655030c6df07099b4d908f11be3d86";
logging-data="706913"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1//0kCU4TA/45kQVMSBuvFh"
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:/iHNZF5LKxBlA7BB5vS6u4+Va6U=
Content-Language: en-US
In-Reply-To: <uhp9k7$l6kb$1@dont-email.me>
 by: olcott - Mon, 30 Oct 2023 22:46 UTC

On 10/30/2023 5:10 PM, olcott wrote:
> On 10/30/2023 3:11 PM, olcott wrote:
>> On 10/30/2023 1:08 PM, olcott wrote:
>>> On 10/30/2023 12:23 PM, olcott wrote:
>>>> On 10/30/2023 11:57 AM, olcott wrote:
>>>>> On 10/30/2023 11:29 AM, olcott wrote:
>>>>>> On 10/29/2023 12:30 PM, olcott wrote:
>>>>>>> *Everyone agrees that this is impossible*
>>>>>>> No computer program H can correctly predict what another computer
>>>>>>> program D will do when D has been programmed to do the opposite of
>>>>>>> whatever H says.
>>>>>>>
>>>>>>> H(D) is functional notation that specifies the return value from
>>>>>>> H(D)
>>>>>>> Correct(H(D)==false) means that H(D) is correct that D does not halt
>>>>>>> Correct(H(D)==true) means that H(D) is correct that D does halt
>>>>>>>
>>>>>>> For all H ∈ TM there exists input D such that
>>>>>>> (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false
>>>>>>>
>>>>>>> *No one pays attention to what this impossibility means*
>>>>>>> The halting problem is defined as an unsatisfiable specification
>>>>>>> thus
>>>>>>> isomorphic to a question that has been defined to have no correct
>>>>>>> answer.
>>>>>>>
>>>>>>> What time is it (yes or no)?
>>>>>>> has no correct answer because there is something wrong with the
>>>>>>> question. In this case we know to blame the question and not the one
>>>>>>> answering it.
>>>>>>>
>>>>>>> When we understand that there are some inputs to every TM H that
>>>>>>> contradict both Boolean return values that H could return then the
>>>>>>> question: Does your input halt? is essentially a self-contradictory
>>>>>>> (thus incorrect) question in these cases.
>>>>>>>
>>>>>>> The inability to correctly answer an incorrect question places no
>>>>>>> actual
>>>>>>> limit on anyone or anything.
>>>>>>>
>>>>>>> This insight opens up an alternative treatment of these pathological
>>>>>>> inputs the same way that ZFC handled Russell's Paradox.
>>>>>>>
>>>>>>
>>>>>> *The halting problem proofs merely show that*
>>>>>> *self-contradictory questions have no correct answer*
>>>>>>
>>>>>> *A self-contradictory question is defined as*
>>>>>>    Any yes/no question that contradicts both yes/no answers.
>>>>>>
>>>>>> For every H in the set of all Turing Machines there exists a D
>>>>>> that derives a self-contradictory question for this H in that
>>>>>> (a) If this H says that its D will halt, D loops
>>>>>> (b) If this H says that its D will loop, D halts.
>>>>>> *Thus the question: Does D halt? is contradicted by some D for
>>>>>> each H*
>>>>>
>>>>> *proving that this is literally true*
>>>>> *The halting problem proofs merely show that*
>>>>> *self-contradictory questions have no correct answer*
>>>>>
>>>>
>>>>     Nope, since each specific question HAS
>>>>     a correct answer, it shows that, by your
>>>>     own definition, it isn't "Self-Contradictory"
>>>>
>>>> *That is a deliberate strawman deception paraphrase*
>>>> *That is a deliberate strawman deception paraphrase*
>>>> *That is a deliberate strawman deception paraphrase*
>>>>
>>>> There does not exist a solution to the halting problem because
>>>> *for every Turing Machine of the infinite set of all Turing machines*
>>>> *for every Turing Machine of the infinite set of all Turing machines*
>>>> *for every Turing Machine of the infinite set of all Turing machines*
>>>>
>>>> there exists a D that makes the question:
>>>> Does your input halt?
>>>> a self-contradictory thus incorrect question.
>>>
>>>     Where does it say that a Turing
>>>     Machine must exist to do it?
>>>
>>> *The only reason that no such Turing Machine exists is*
>>>
>>> For every H in the set of all Turing Machines there exists a D
>>> that derives a self-contradictory question for this H in that
>>> (a) If this H says that its D will halt, D loops
>>> (b) If this H that says its D will loop it halts.
>>> *Thus the question: Does D halt? is contradicted by some D for each H*
>>>
>>> *therefore*
>>>
>>> *The halting problem proofs merely show that*
>>> *self-contradictory questions have no correct answer*
>>
>>     The issue that you ignore is that you are
>>     conflating a set of questions with a question,
>>     and are basing your logic on a strawman.
>>
>> It is not my mistake. Linguists understand that the
>> context of who is asked a question changes the meaning
>> of the question.
>>
>> This can easily be shown to apply to decision problem
>> instances as follows:
>>
>> In that H.true and H.false are the wrong answers when
>> D calls H to do the opposite of whatever value
>> H returns.
>>
>> Whereas exactly one of H1.true or H1.false is correct
>> for this exact same D.
>>
>> This proves that the question: "Does your input halt?"
>> has a different meaning across the H and H1 pairs.
>
>    It *CAN* if the question asks something about
>    the person being questioned.
>
>    But it *CAN'T* if the question doesn't in any
>    way refer to who you ask.
>
> D calls H thus D DOES refer to H
> D does not call H1 therefore D does not refer to H1
>

The QUESTION doesn't refer to the person
being asked?

That D calls H doesn't REFER to the asker,
but to a specific machine.

For the H/D pair D does refer to the specific
machine being asked: Does your input halt?
D knows about and references H.

For the H1/D pair D does not refer to the specific
machine being asked: Does your input halt?
D does not know about or reference H1.
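The H/H1 asymmetry argued above can be sketched as a toy Python model (an illustration only: make_D, H, and H1 are hypothetical names, a boolean True is taken to mean "halts", and Python functions stand in for Turing machines):

```python
def make_D(H):
    """Build the pathological input D against a specific claimed decider H.

    D is hard-wired to call this exact H and do the opposite of
    whatever H predicts about D run on itself.
    """
    def D(x):
        if H(D, D):          # H predicts D(D) halts ...
            while True:      # ... so D loops forever
                pass
        return               # H predicts D(D) loops, so D halts

    return D

def H(prog, data):
    return False             # the decider D was built against: "D(D) loops"

def H1(prog, data):
    return True              # a different decider asked about the same D

D = make_D(H)                # D embeds H, not H1

# D consults H, which answers False, so D(D) returns, i.e. halts.
# H is therefore wrong about D(D), while H1's True is correct:
print(D(D))                  # halts (returns None)
print(H(D, D), H1(D, D))
```

Whichever value this H returned, D would do the opposite of it, whereas H1, which D never calls, is free to answer correctly about the very same D.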

If these things were not extremely difficult to
understand they would have been addressed before
publication in 1936.

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: Does the halting problem actually limit what computers can do?

<uhpcel$3b08n$5@i2pn2.org>


https://www.rocksolidbbs.com/computers/article-flat.php?id=12014&group=comp.ai.philosophy#12014

Newsgroups: sci.math sci.logic comp.theory comp.ai.philosophy
Path: i2pn2.org!.POSTED!not-for-mail
From: richard@damon-family.org (Richard Damon)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Subject: Re: Does the halting problem actually limit what computers can do?
Date: Mon, 30 Oct 2023 15:58:30 -0700
Organization: i2pn2 (i2pn.org)
Message-ID: <uhpcel$3b08n$5@i2pn2.org>
References: <uhm4r5$7n5$1@dont-email.me> <uholm0$hgki$1@dont-email.me>
<uhon90$hqbs$1@dont-email.me> <uhoopq$i2kc$1@dont-email.me>
<uhore5$im5o$1@dont-email.me> <uhp2m7$k1ls$1@dont-email.me>
<uhp9k7$l6kb$1@dont-email.me> <uhpbot$lib1$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Mon, 30 Oct 2023 22:58:29 -0000 (UTC)
Injection-Info: i2pn2.org;
logging-data="3506455"; mail-complaints-to="usenet@i2pn2.org";
posting-account="diqKR1lalukngNWEqoq9/uFtbkm5U+w3w6FQ0yesrXg";
User-Agent: Mozilla Thunderbird
Content-Language: en-US
In-Reply-To: <uhpbot$lib1$1@dont-email.me>
 by: Richard Damon - Mon, 30 Oct 2023 22:58 UTC

On 10/30/23 3:46 PM, olcott wrote:
> On 10/30/2023 5:10 PM, olcott wrote:
>> On 10/30/2023 3:11 PM, olcott wrote:
>>> On 10/30/2023 1:08 PM, olcott wrote:
>>>> On 10/30/2023 12:23 PM, olcott wrote:
>>>>> On 10/30/2023 11:57 AM, olcott wrote:
>>>>>> On 10/30/2023 11:29 AM, olcott wrote:
>>>>>>> On 10/29/2023 12:30 PM, olcott wrote:
>>>>>>>> *Everyone agrees that this is impossible*
>>>>>>>> No computer program H can correctly predict what another computer
>>>>>>>> program D will do when D has been programmed to do the opposite of
>>>>>>>> whatever H says.
>>>>>>>>
>>>>>>>> H(D) is functional notation that specifies the return value from
>>>>>>>> H(D)
>>>>>>>> Correct(H(D)==false) means that H(D) is correct that D does not
>>>>>>>> halt
>>>>>>>> Correct(H(D)==true) means that H(D) is correct that D does halt
>>>>>>>>
>>>>>>>> For all H ∈ TM there exists input D such that
>>>>>>> (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false
>>>>>>>>
>>>>>>>> *No one pays attention to what this impossibility means*
>>>>>>>> The halting problem is defined as an unsatisfiable specification
>>>>>>>> thus
>>>>>>>> isomorphic to a question that has been defined to have no correct
>>>>>>>> answer.
>>>>>>>>
>>>>>>>> What time is it (yes or no)?
>>>>>>>> has no correct answer because there is something wrong with the
>>>>>>>> question. In this case we know to blame the question and not the
>>>>>>>> one
>>>>>>>> answering it.
>>>>>>>>
>>>>>>>> When we understand that there are some inputs to every TM H that
>>>>>>>> contradict both Boolean return values that H could return then the
>>>>>>>> question: Does your input halt? is essentially a self-contradictory
>>>>>>>> (thus incorrect) question in these cases.
>>>>>>>>
>>>>>>>> The inability to correctly answer an incorrect question places
>>>>>>>> no actual
>>>>>>>> limit on anyone or anything.
>>>>>>>>
>>>>>>>> This insight opens up an alternative treatment of these
>>>>>>>> pathological
>>>>>>>> inputs the same way that ZFC handled Russell's Paradox.
>>>>>>>>
>>>>>>>
>>>>>>> *The halting problem proofs merely show that*
>>>>>>> *self-contradictory questions have no correct answer*
>>>>>>>
>>>>>>> *A self-contradictory question is defined as*
>>>>>>>    Any yes/no question that contradicts both yes/no answers.
>>>>>>>
>>>>>>> For every H in the set of all Turing Machines there exists a D
>>>>>>> that derives a self-contradictory question for this H in that
>>>>>>> (a) If this H says that its D will halt, D loops
>>>>>>> (b) If this H says that its D will loop, D halts.
>>>>>>> *Thus the question: Does D halt? is contradicted by some D for
>>>>>>> each H*
>>>>>>
>>>>>> *proving that this is literally true*
>>>>>> *The halting problem proofs merely show that*
>>>>>> *self-contradictory questions have no correct answer*
>>>>>>
>>>>>
>>>>>     Nope, since each specific question HAS
>>>>>     a correct answer, it shows that, by your
>>>>>     own definition, it isn't "Self-Contradictory"
>>>>>
>>>>> *That is a deliberate strawman deception paraphrase*
>>>>> *That is a deliberate strawman deception paraphrase*
>>>>> *That is a deliberate strawman deception paraphrase*
>>>>>
>>>>> There does not exist a solution to the halting problem because
>>>>> *for every Turing Machine of the infinite set of all Turing machines*
>>>>> *for every Turing Machine of the infinite set of all Turing machines*
>>>>> *for every Turing Machine of the infinite set of all Turing machines*
>>>>>
>>>>> there exists a D that makes the question:
>>>>> Does your input halt?
>>>>> a self-contradictory thus incorrect question.
>>>>
>>>>     Where does it say that a Turing
>>>>     Machine must exist to do it?
>>>>
>>>> *The only reason that no such Turing Machine exists is*
>>>>
>>>> For every H in the set of all Turing Machines there exists a D
>>>> that derives a self-contradictory question for this H in that
>>>> (a) If this H says that its D will halt, D loops
>>>> (b) If this H says that its D will loop, D halts.
>>>> *Thus the question: Does D halt? is contradicted by some D for each H*
>>>>
>>>> *therefore*
>>>>
>>>> *The halting problem proofs merely show that*
>>>> *self-contradictory questions have no correct answer*
>>>
>>>     The issue that you ignore is that you are
>>>     conflating a set of questions with a question,
>>>     and are basing your logic on a strawman.
>>>
>>> It is not my mistake. Linguists understand that the
>>> context of who is asked a question changes the meaning
>>> of the question.
>>>
>>> This can easily be shown to apply to decision problem
>>> instances as follows:
>>>
>>> In that H.true and H.false are the wrong answers when
>>> D calls H to do the opposite of whatever value
>>> H returns.
>>>
>>> Whereas exactly one of H1.true or H1.false is correct
>>> for this exact same D.
>>>
>>> This proves that the question: "Does your input halt?"
>>> has a different meaning across the H and H1 pairs.
>>
>>     It *CAN* if the question asks something about
>>     the person being questioned.
>>
>>     But it *CAN'T* if the question doesn't in any
>>     way refer to who you ask.
>>
>> D calls H thus D DOES refer to H
>> D does not call H1 therefore D does not refer to H1
>>
>
>    The QUESTION doesn't refer to the person
>    being asked?
>
>    That D calls H doesn't REFER to the asker,
>    but to a specific machine.
>
> For the H/D pair D does refer to the specific
> machine being asked: Does your input halt?
> D knows about and references H.

Nope. The question "does this input, representing D(D), halt?" does NOT
refer to any particular decider, just whatever one it is given to.

>
> For the H1/D pair D does not refer to the specific
> machine being asked: Does your input halt?
> D does not know about or reference H1.
>
> If these things were not extremely difficult to
> understand they would have been addressed before
> publication in 1936.
>

They are only "extremely difficult to understand" because they are FALSE
statements.

You are just too stupid to understand that the Halting question:

"Does the computation represented by the input halt?" doesn't have
ANYTHING in it that refers to the machine doing the deciding, and the
input being represented also doesn't refer to the machine doing the
deciding, but only to a particular decider that it is designed to foil.

Just because we give it to that one doesn't make it "refer" to the one
being asked.

You are just FAILING basic logic theory, because you are showing
yourself to be a total idiot.

Please find reliable sources for your "claims" and definitions.

Re: Does the halting problem actually limit what computers can do?

<uhpdj4$m00m$1@dont-email.me>


https://www.rocksolidbbs.com/computers/article-flat.php?id=12015&group=comp.ai.philosophy#12015

Newsgroups: sci.math sci.logic comp.theory comp.ai.philosophy
Path: i2pn2.org!i2pn.org!eternal-september.org!feeder2.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: polcott2@gmail.com (olcott)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Subject: Re: Does the halting problem actually limit what computers can do?
Date: Mon, 30 Oct 2023 18:17:55 -0500
Organization: A noiseless patient Spider
Lines: 155
Message-ID: <uhpdj4$m00m$1@dont-email.me>
References: <uhm4r5$7n5$1@dont-email.me> <uholm0$hgki$1@dont-email.me>
<uhon90$hqbs$1@dont-email.me> <uhoopq$i2kc$1@dont-email.me>
<uhore5$im5o$1@dont-email.me> <uhp2m7$k1ls$1@dont-email.me>
<uhp9k7$l6kb$1@dont-email.me> <uhpbot$lib1$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Mon, 30 Oct 2023 23:17:56 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="3083f5adb2def64a185472ea93a42693";
logging-data="720918"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1/RLM7S8iEFAzcUq7oQMCel"
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:Fx69Bp0ZB5RNb5Qzm3oYbQDkwGI=
Content-Language: en-US
In-Reply-To: <uhpbot$lib1$1@dont-email.me>
 by: olcott - Mon, 30 Oct 2023 23:17 UTC

On 10/30/2023 5:46 PM, olcott wrote:
> On 10/30/2023 5:10 PM, olcott wrote:
>> On 10/30/2023 3:11 PM, olcott wrote:
>>> On 10/30/2023 1:08 PM, olcott wrote:
>>>> On 10/30/2023 12:23 PM, olcott wrote:
>>>>> On 10/30/2023 11:57 AM, olcott wrote:
>>>>>> On 10/30/2023 11:29 AM, olcott wrote:
>>>>>>> On 10/29/2023 12:30 PM, olcott wrote:
>>>>>>>> *Everyone agrees that this is impossible*
>>>>>>>> No computer program H can correctly predict what another computer
>>>>>>>> program D will do when D has been programmed to do the opposite of
>>>>>>>> whatever H says.
>>>>>>>>
>>>>>>>> H(D) is functional notation that specifies the return value from
>>>>>>>> H(D)
>>>>>>>> Correct(H(D)==false) means that H(D) is correct that D does not
>>>>>>>> halt
>>>>>>>> Correct(H(D)==true) means that H(D) is correct that D does halt
>>>>>>>>
>>>>>>>> For all H ∈ TM there exists input D such that
>>>>>>>> (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false
>>>>>>>>
>>>>>>>> *No one pays attention to what this impossibility means*
>>>>>>>> The halting problem is defined as an unsatisfiable specification
>>>>>>>> thus
>>>>>>>> isomorphic to a question that has been defined to have no correct
>>>>>>>> answer.
>>>>>>>>
>>>>>>>> What time is it (yes or no)?
>>>>>>>> has no correct answer because there is something wrong with the
>>>>>>>> question. In this case we know to blame the question and not the
>>>>>>>> one
>>>>>>>> answering it.
>>>>>>>>
>>>>>>>> When we understand that there are some inputs to every TM H that
>>>>>>>> contradict both Boolean return values that H could return then the
>>>>>>>> question: Does your input halt? is essentially a self-contradictory
>>>>>>>> (thus incorrect) question in these cases.
>>>>>>>>
>>>>>>>> The inability to correctly answer an incorrect question places
>>>>>>>> no actual
>>>>>>>> limit on anyone or anything.
>>>>>>>>
>>>>>>>> This insight opens up an alternative treatment of these
>>>>>>>> pathological
>>>>>>>> inputs the same way that ZFC handled Russell's Paradox.
>>>>>>>>
>>>>>>>
>>>>>>> *The halting problem proofs merely show that*
>>>>>>> *self-contradictory questions have no correct answer*
>>>>>>>
>>>>>>> *A self-contradictory question is defined as*
>>>>>>>    Any yes/no question that contradicts both yes/no answers.
>>>>>>>
>>>>>>> For every H in the set of all Turing Machines there exists a D
>>>>>>> that derives a self-contradictory question for this H in that
>>>>>>> (a) If this H says that its D will halt, D loops
>>>>>>> (b) If this H says that its D will loop, D halts.
>>>>>>> *Thus the question: Does D halt? is contradicted by some D for
>>>>>>> each H*
>>>>>>
>>>>>> *proving that this is literally true*
>>>>>> *The halting problem proofs merely show that*
>>>>>> *self-contradictory questions have no correct answer*
>>>>>>
>>>>>
>>>>>     Nope, since each specific question HAS
>>>>>     a correct answer, it shows that, by your
>>>>>     own definition, it isn't "Self-Contradictory"
>>>>>
>>>>> *That is a deliberate strawman deception paraphrase*
>>>>> *That is a deliberate strawman deception paraphrase*
>>>>> *That is a deliberate strawman deception paraphrase*
>>>>>
>>>>> There does not exist a solution to the halting problem because
>>>>> *for every Turing Machine of the infinite set of all Turing machines*
>>>>> *for every Turing Machine of the infinite set of all Turing machines*
>>>>> *for every Turing Machine of the infinite set of all Turing machines*
>>>>>
>>>>> there exists a D that makes the question:
>>>>> Does your input halt?
>>>>> a self-contradictory thus incorrect question.
>>>>
>>>>     Where does it say that a Turing
>>>>     Machine must exist to do it?
>>>>
>>>> *The only reason that no such Turing Machine exists is*
>>>>
>>>> For every H in the set of all Turing Machines there exists a D
>>>> that derives a self-contradictory question for this H in that
>>>> (a) If this H says that its D will halt, D loops
>>>> (b) If this H says that its D will loop, D halts.
>>>> *Thus the question: Does D halt? is contradicted by some D for each H*
>>>>
>>>> *therefore*
>>>>
>>>> *The halting problem proofs merely show that*
>>>> *self-contradictory questions have no correct answer*
>>>
>>>     The issue that you ignore is that you are
>>>     conflating a set of questions with a question,
>>>     and are basing your logic on a strawman.
>>>
>>> It is not my mistake. Linguists understand that the
>>> context of who is asked a question changes the meaning
>>> of the question.
>>>
>>> This can easily be shown to apply to decision problem
>>> instances as follows:
>>>
>>> In that H.true and H.false are the wrong answers when
>>> D calls H to do the opposite of whatever value
>>> H returns.
>>>
>>> Whereas exactly one of H1.true or H1.false is correct
>>> for this exact same D.
>>>
>>> This proves that the question: "Does your input halt?"
>>> has a different meaning across the H and H1 pairs.
>>
>>     It *CAN* if the question asks something about
>>     the person being questioned.
>>
>>     But it *CAN'T* if the question doesn't in any
>>     way refer to who you ask.
>>
>> D calls H thus D DOES refer to H
>> D does not call H1 therefore D does not refer to H1
>>
>
>    The QUESTION doesn't refer to the person
>    being asked?
>
>    That D calls H doesn't REFER to the asker,
>    but to a specific machine.
>
> For the H/D pair D does refer to the specific
> machine being asked: Does your input halt?
> D knows about and references H.

Nope. The question "does this input, representing
D(D), halt?" does NOT refer to any particular decider,
just whatever one it is given to.

*You can ignore that D calls H; nonetheless, when D*
*calls H, this does mean that D <is> referencing H*

The only way that I can tell that I am proving my point
is that rebuttals from people that are stuck in rebuttal
mode become increasingly nonsensical.

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer

Re: Does the halting problem actually limit what computers can do?

<uhpf53$3b08m$1@i2pn2.org>


https://www.rocksolidbbs.com/computers/article-flat.php?id=12016&group=comp.ai.philosophy#12016

Newsgroups: sci.math sci.logic comp.theory comp.ai.philosophy
Path: i2pn2.org!.POSTED!not-for-mail
From: richard@damon-family.org (Richard Damon)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Subject: Re: Does the halting problem actually limit what computers can do?
Date: Mon, 30 Oct 2023 16:44:36 -0700
Organization: i2pn2 (i2pn.org)
Message-ID: <uhpf53$3b08m$1@i2pn2.org>
References: <uhm4r5$7n5$1@dont-email.me> <uholm0$hgki$1@dont-email.me>
<uhon90$hqbs$1@dont-email.me> <uhoopq$i2kc$1@dont-email.me>
<uhore5$im5o$1@dont-email.me> <uhp2m7$k1ls$1@dont-email.me>
<uhp9k7$l6kb$1@dont-email.me> <uhpbot$lib1$1@dont-email.me>
<uhpdj4$m00m$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Mon, 30 Oct 2023 23:44:36 -0000 (UTC)
Injection-Info: i2pn2.org;
logging-data="3506454"; mail-complaints-to="usenet@i2pn2.org";
posting-account="diqKR1lalukngNWEqoq9/uFtbkm5U+w3w6FQ0yesrXg";
User-Agent: Mozilla Thunderbird
Content-Language: en-US
In-Reply-To: <uhpdj4$m00m$1@dont-email.me>
 by: Richard Damon - Mon, 30 Oct 2023 23:44 UTC

On 10/30/23 4:17 PM, olcott wrote:
> On 10/30/2023 5:46 PM, olcott wrote:
>> On 10/30/2023 5:10 PM, olcott wrote:
>>> On 10/30/2023 3:11 PM, olcott wrote:
>>>> On 10/30/2023 1:08 PM, olcott wrote:
>>>>> On 10/30/2023 12:23 PM, olcott wrote:
>>>>>> On 10/30/2023 11:57 AM, olcott wrote:
>>>>>>> On 10/30/2023 11:29 AM, olcott wrote:
>>>>>>>> On 10/29/2023 12:30 PM, olcott wrote:
>>>>>>>>> *Everyone agrees that this is impossible*
>>>>>>>>> No computer program H can correctly predict what another computer
>>>>>>>>> program D will do when D has been programmed to do the opposite of
>>>>>>>>> whatever H says.
>>>>>>>>>
>>>>>>>>> H(D) is functional notation that specifies the return value
>>>>>>>>> from H(D)
>>>>>>>>> Correct(H(D)==false) means that H(D) is correct that D does not
>>>>>>>>> halt
>>>>>>>>> Correct(H(D)==true) means that H(D) is correct that D does halt
>>>>>>>>>
>>>>>>>>> For all H ∈ TM there exists input D such that
>>>>>>>>> (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false
>>>>>>>>>
>>>>>>>>> *No one pays attention to what this impossibility means*
>>>>>>>>> The halting problem is defined as an unsatisfiable
>>>>>>>>> specification thus
>>>>>>>>> isomorphic to a question that has been defined to have no correct
>>>>>>>>> answer.
>>>>>>>>>
>>>>>>>>> What time is it (yes or no)?
>>>>>>>>> has no correct answer because there is something wrong with the
>>>>>>>>> question. In this case we know to blame the question and not
>>>>>>>>> the one
>>>>>>>>> answering it.
>>>>>>>>>
>>>>>>>>> When we understand that there are some inputs to every TM H that
>>>>>>>>> contradict both Boolean return values that H could return then the
>>>>>>>>> question: Does your input halt? is essentially a
>>>>>>>>> self-contradictory
>>>>>>>>> (thus incorrect) question in these cases.
>>>>>>>>>
>>>>>>>>> The inability to correctly answer an incorrect question places
>>>>>>>>> no actual
>>>>>>>>> limit on anyone or anything.
>>>>>>>>>
>>>>>>>>> This insight opens up an alternative treatment of these
>>>>>>>>> pathological
>>>>>>>>> inputs the same way that ZFC handled Russell's Paradox.
>>>>>>>>>
>>>>>>>>
>>>>>>>> *The halting problem proofs merely show that*
>>>>>>>> *self-contradictory questions have no correct answer*
>>>>>>>>
>>>>>>>> *A self-contradictory question is defined as*
>>>>>>>>    Any yes/no question that contradicts both yes/no answers.
>>>>>>>>
>>>>>>>> For every H in the set of all Turing Machines there exists a D
>>>>>>>> that derives a self-contradictory question for this H in that
>>>>>>>> (a) If this H says that its D will halt, D loops
>>>>>>>> (b) If this H says that its D will loop, D halts.
>>>>>>>> *Thus the question: Does D halt? is contradicted by some D for
>>>>>>>> each H*
>>>>>>>
>>>>>>> *proving that this is literally true*
>>>>>>> *The halting problem proofs merely show that*
>>>>>>> *self-contradictory questions have no correct answer*
>>>>>>>
>>>>>>
>>>>>>     Nope, since each specific question HAS
>>>>>>     a correct answer, it shows that, by your
>>>>>>     own definition, it isn't "Self-Contradictory"
>>>>>>
>>>>>> *That is a deliberate strawman deception paraphrase*
>>>>>> *That is a deliberate strawman deception paraphrase*
>>>>>> *That is a deliberate strawman deception paraphrase*
>>>>>>
>>>>>> There does not exist a solution to the halting problem because
>>>>>> *for every Turing Machine of the infinite set of all Turing machines*
>>>>>> *for every Turing Machine of the infinite set of all Turing machines*
>>>>>> *for every Turing Machine of the infinite set of all Turing machines*
>>>>>>
>>>>>> there exists a D that makes the question:
>>>>>> Does your input halt?
>>>>>> a self-contradictory thus incorrect question.
>>>>>
>>>>>     Where does it say that a Turing
>>>>>     Machine must exist to do it?
>>>>>
>>>>> *The only reason that no such Turing Machine exists is*
>>>>>
>>>>> For every H in the set of all Turing Machines there exists a D
>>>>> that derives a self-contradictory question for this H in that
>>>>> (a) If this H says that its D will halt, D loops
>>>>> (b) If this H says that its D will loop, D halts.
>>>>> *Thus the question: Does D halt? is contradicted by some D for each H*
>>>>>
>>>>> *therefore*
>>>>>
>>>>> *The halting problem proofs merely show that*
>>>>> *self-contradictory questions have no correct answer*
>>>>
>>>>     The issue that you ignore is that you are
>>>>     conflating a set of questions with a question,
>>>>     and are basing your logic on a strawman.
>>>>
>>>> It is not my mistake. Linguists understand that the
>>>> context of who is asked a question changes the meaning
>>>> of the question.
>>>>
>>>> This can easily be shown to apply to decision problem
>>>> instances as follows:
>>>>
>>>> In that H.true and H.false are the wrong answers when
>>>> D calls H to do the opposite of whatever value
>>>> H returns.
>>>>
>>>> Whereas exactly one of H1.true or H1.false is correct
>>>> for this exact same D.
>>>>
>>>> This proves that the question: "Does your input halt?"
>>>> has a different meaning across the H and H1 pairs.
>>>
>>>     It *CAN* if the question asks something about
>>>     the person being questioned.
>>>
>>>     But it *CAN'T* if the question doesn't in any
>>>     way refer to who you ask.
>>>
>>> D calls H thus D DOES refer to H
>>> D does not call H1 therefore D does not refer to H1
>>>
>>
>>     The QUESTION doesn't refer to the person
>>     being asked?
>>
>>     That D calls H doesn't REFER to the asker,
>>     but to a specific machine.
>>
>> For the H/D pair D does refer to the specific
>> machine being asked: Does your input halt?
>> D knows about and references H.
>
>   Nope. The question "does this input, representing
>   D(D), halt?" does NOT refer to any particular decider,
>   just whatever one it is given to.
>
> *You can ignore that D calls H; nonetheless, when D*
> *calls H, this does mean that D <is> referencing H*
>
> The only way that I can tell that I am proving my point
> is that rebuttals from people that are stuck in rebuttal
> mode become increasingly nonsensical.
>

CALLING H doesn't REFER to the decider deciding it.

Note the key difference: a Turing machine can have a copy of the code of
another machine, but it doesn't "refer" to it, as any changes to that
machine after the first machine was made don't change it.

That is the key point you miss.

D has a copy of the code of the H that you are claiming gives the right
value. When you try to vary that H to prove something, that DOESN'T change
D, as D had a copy of the original code of H, not a "reference" to H.


Re: Does the halting problem actually limit what computers can do?

<uhpgan$md1k$1@dont-email.me>


https://www.rocksolidbbs.com/computers/article-flat.php?id=12017&group=comp.ai.philosophy#12017

Newsgroups: sci.math sci.logic comp.theory comp.ai.philosophy
Path: i2pn2.org!rocksolid2!news.neodome.net!news.mixmin.net!eternal-september.org!feeder2.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: polcott2@gmail.com (olcott)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Subject: Re: Does the halting problem actually limit what computers can do?
Date: Mon, 30 Oct 2023 19:04:38 -0500
Organization: A noiseless patient Spider
Lines: 162
Message-ID: <uhpgan$md1k$1@dont-email.me>
References: <uhm4r5$7n5$1@dont-email.me> <uholm0$hgki$1@dont-email.me>
<uhon90$hqbs$1@dont-email.me> <uhoopq$i2kc$1@dont-email.me>
<uhore5$im5o$1@dont-email.me> <uhp2m7$k1ls$1@dont-email.me>
<uhp9k7$l6kb$1@dont-email.me> <uhpbot$lib1$1@dont-email.me>
<uhpdj4$m00m$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Tue, 31 Oct 2023 00:04:39 -0000 (UTC)
Injection-Info: dont-email.me; posting-host="3083f5adb2def64a185472ea93a42693";
logging-data="734260"; mail-complaints-to="abuse@eternal-september.org"; posting-account="U2FsdGVkX1/RGDK45Je4mKLY4musehJ0"
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:XEm0DjRZ4AlrCxlKMsDPWxex970=
In-Reply-To: <uhpdj4$m00m$1@dont-email.me>
Content-Language: en-US
 by: olcott - Tue, 31 Oct 2023 00:04 UTC

On 10/30/2023 6:17 PM, olcott wrote:
> On 10/30/2023 5:46 PM, olcott wrote:
>> On 10/30/2023 5:10 PM, olcott wrote:
>>> On 10/30/2023 3:11 PM, olcott wrote:
>>>> On 10/30/2023 1:08 PM, olcott wrote:
>>>>> On 10/30/2023 12:23 PM, olcott wrote:
>>>>>> On 10/30/2023 11:57 AM, olcott wrote:
>>>>>>> On 10/30/2023 11:29 AM, olcott wrote:
>>>>>>>> On 10/29/2023 12:30 PM, olcott wrote:
>>>>>>>>> *Everyone agrees that this is impossible*
>>>>>>>>> No computer program H can correctly predict what another computer
>>>>>>>>> program D will do when D has been programmed to do the opposite of
>>>>>>>>> whatever H says.
>>>>>>>>>
>>>>>>>>> H(D) is functional notation that specifies the return value
>>>>>>>>> from H(D)
>>>>>>>>> Correct(H(D)==false) means that H(D) is correct that D does not
>>>>>>>>> halt
>>>>>>>>> Correct(H(D)==true) means that H(D) is correct that D does halt
>>>>>>>>>
>>>>>>>>> For all H ∈ TM there exists input D such that
>>>>>>>>> (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false
>>>>>>>>>
>>>>>>>>> *No one pays attention to what this impossibility means*
>>>>>>>>> The halting problem is defined as an unsatisfiable
>>>>>>>>> specification thus
>>>>>>>>> isomorphic to a question that has been defined to have no correct
>>>>>>>>> answer.
>>>>>>>>>
>>>>>>>>> What time is it (yes or no)?
>>>>>>>>> has no correct answer because there is something wrong with the
>>>>>>>>> question. In this case we know to blame the question and not
>>>>>>>>> the one
>>>>>>>>> answering it.
>>>>>>>>>
>>>>>>>>> When we understand that there are some inputs to every TM H that
>>>>>>>>> contradict both Boolean return values that H could return then the
>>>>>>>>> question: Does your input halt? is essentially a
>>>>>>>>> self-contradictory
>>>>>>>>> (thus incorrect) question in these cases.
>>>>>>>>>
>>>>>>>>> The inability to correctly answer an incorrect question places
>>>>>>>>> no actual
>>>>>>>>> limit on anyone or anything.
>>>>>>>>>
>>>>>>>>> This insight opens up an alternative treatment of these
>>>>>>>>> pathological
>>>>>>>>> inputs the same way that ZFC handled Russell's Paradox.
>>>>>>>>>
>>>>>>>>
>>>>>>>> *The halting problem proofs merely show that*
>>>>>>>> *self-contradictory questions have no correct answer*
>>>>>>>>
>>>>>>>> *A self-contradictory question is defined as*
>>>>>>>>    Any yes/no question that contradicts both yes/no answers.
>>>>>>>>
>>>>>>>> For every H in the set of all Turing Machines there exists a D
>>>>>>>> that derives a self-contradictory question for this H in that
>>>>>>>> (a) If this H says that its D will halt, D loops
>>>>>>>> (b) If this H says that its D will loop, D halts.
>>>>>>>> *Thus the question: Does D halt? is contradicted by some D for
>>>>>>>> each H*
>>>>>>>
>>>>>>> *proving that this is literally true*
>>>>>>> *The halting problem proofs merely show that*
>>>>>>> *self-contradictory questions have no correct answer*
>>>>>>>
>>>>>>
>>>>>>     Nope, since each specific question HAS
>>>>>>     a correct answer, it shows that, by your
>>>>>>     own definition, it isn't "Self-Contradictory"
>>>>>>
>>>>>> *That is a deliberate strawman deception paraphrase*
>>>>>> *That is a deliberate strawman deception paraphrase*
>>>>>> *That is a deliberate strawman deception paraphrase*
>>>>>>
>>>>>> There does not exist a solution to the halting problem because
>>>>>> *for every Turing Machine of the infinite set of all Turing machines*
>>>>>> *for every Turing Machine of the infinite set of all Turing machines*
>>>>>> *for every Turing Machine of the infinite set of all Turing machines*
>>>>>>
>>>>>> there exists a D that makes the question:
>>>>>> Does your input halt?
>>>>>> a self-contradictory thus incorrect question.
>>>>>
>>>>>     Where does it say that a Turing
>>>>>     Machine must exist to do it?
>>>>>
>>>>> *The only reason that no such Turing Machine exists is*
>>>>>
>>>>> For every H in the set of all Turing Machines there exists a D
>>>>> that derives a self-contradictory question for this H in that
>>>>> (a) If this H says that its D will halt, D loops
>>>>> (b) If this H says that its D will loop, D halts.
>>>>> *Thus the question: Does D halt? is contradicted by some D for each H*
>>>>>
>>>>> *therefore*
>>>>>
>>>>> *The halting problem proofs merely show that*
>>>>> *self-contradictory questions have no correct answer*
>>>>
>>>>     The issue that you ignore is that you are
>>>>     conflating a set of questions with a question,
>>>>     and are basing your logic on a strawman.
>>>>
>>>> It is not my mistake. Linguists understand that the
>>>> context of who is asked a question changes the meaning
>>>> of the question.
>>>>
>>>> This can easily be shown to apply to decision problem
>>>> instances as follows:
>>>>
>>>> In that H.true and H.false are the wrong answer when
>>>> D calls H to do the opposite of whatever value that
>>>> either H returns.
>>>>
>>>> Whereas exactly one of H1.true or H1.false is correct
>>>> for this exact same D.
>>>>
>>>> This proves that the question: "Does your input halt?"
>>>> has a different meaning across the H and H1 pairs.
>>>
>>>     It *CAN* if the question asks something about
>>>     the person being questioned.
>>>
>>>     But it *CAN'T* if the question doesn't in any
>>>     way refer to who you ask.
>>>
>>> D calls H thus D DOES refer to H
>>> D does not call H1 therefore D does not refer to H1
>>>
>>
>>     The QUESTION doesn't refer to the person
>>     being asked?
>>
>>     That D calls H doesn't REFER to the asker,
>>     but to a specific machine.
>>
>> For the H/D pair D does refer to the specific
>> machine being asked: Does your input halt?
>> D knows about and references H.
>
>   Nope. The question "does this input, representing
>   D(D), halt?" does NOT refer to any particular decider,
>   just whatever one it is given to.
>
> *You can ignore that D calls H; nonetheless, when D*
> *calls H this does mean that D <is> referencing H*
>
> The only way that I can tell that I am proving my point
> is that rebuttals from people that are stuck in rebuttal
> mode become increasingly nonsensical.
>

"CALLING H doesn't REFER to the decider deciding it."

Sure it does: with H(D,D), D is calling the decider deciding it.

--
Copyright 2023 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer
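
The construction the thread keeps circling (a D that consults H about itself and then does the opposite) can be sketched in a few lines of Python. This is only an illustrative sketch: the names H and D follow the thread's usage, and this H is a hypothetical always-answers-"halts" decider, not anyone's actual implementation.

```python
def make_D(H):
    # Build the "pathological" input D for a given decider H.
    # D asks H about itself and then does the opposite.
    def D():
        if H(D):           # H predicts "D halts" ...
            while True:    # ... so D loops forever
                pass
        else:              # H predicts "D loops" ...
            return         # ... so D halts
    return D

def H(program):
    # Hypothetical decider that always answers "halts" (True).
    return True

D = make_D(H)
# H claims D halts, but running D() would loop forever, so this H is
# wrong on its own diagonal input; an always-"loops" H fails symmetrically.
print(H(D))  # prints True, yet D() would never return
```

Whichever fixed strategy H embodies, make_D(H) yields an input on which that particular H answers wrongly; this is the "for every H there exists a D" quantifier order the posts above keep restating.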

Re: Does the halting problem actually limit what computers can do?

https://www.rocksolidbbs.com/computers/article-flat.php?id=12018&group=comp.ai.philosophy#12018

From: richard@damon-family.org (Richard Damon)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Subject: Re: Does the halting problem actually limit what computers can do?
Date: Mon, 30 Oct 2023 17:29:50 -0700
Organization: i2pn2 (i2pn.org)
Message-ID: <uhphpv$3bucv$1@i2pn2.org>
 by: Richard Damon - Tue, 31 Oct 2023 00:29 UTC

On 10/30/23 5:04 PM, olcott wrote:
> On 10/30/2023 6:17 PM, olcott wrote:
>> On 10/30/2023 5:46 PM, olcott wrote:
>>> On 10/30/2023 5:10 PM, olcott wrote:
>>>> On 10/30/2023 3:11 PM, olcott wrote:
>>>>> On 10/30/2023 1:08 PM, olcott wrote:
>>>>>> On 10/30/2023 12:23 PM, olcott wrote:
>>>>>>> On 10/30/2023 11:57 AM, olcott wrote:
>>>>>>>> On 10/30/2023 11:29 AM, olcott wrote:
>>>>>>>>> On 10/29/2023 12:30 PM, olcott wrote:
>>>>>>>>>> *Everyone agrees that this is impossible*
>>>>>>>>>> No computer program H can correctly predict what another computer
>>>>>>>>>> program D will do when D has been programmed to do the
>>>>>>>>>> opposite of
>>>>>>>>>> whatever H says.
>>>>>>>>>>
>>>>>>>>>> H(D) is functional notation that specifies the return value
>>>>>>>>>> from H(D)
>>>>>>>>>> Correct(H(D)==false) means that H(D) is correct that D does
>>>>>>>>>> not halt
>>>>>>>>>> Correct(H(D)==true) means that H(D) is correct that D does halt
>>>>>>>>>>
>>>>>>>>>> For all H ∈ TM there exists input D such that
>>>>>>>>>> (Correct(H(D)==false) ∨ Correct(H(D)==true)) == false
>>>>>>>>>>
>>>>>>>>>> *No one pays attention to what this impossibility means*
>>>>>>>>>> The halting problem is defined as an unsatisfiable
>>>>>>>>>> specification thus
>>>>>>>>>> isomorphic to a question that has been defined to have no correct
>>>>>>>>>> answer.
>>>>>>>>>>
>>>>>>>>>> What time is it (yes or no)?
>>>>>>>>>> has no correct answer because there is something wrong with the
>>>>>>>>>> question. In this case we know to blame the question and not
>>>>>>>>>> the one
>>>>>>>>>> answering it.
>>>>>>>>>>
>>>>>>>>>> When we understand that there are some inputs to every TM H that
>>>>>>>>>> contradict both Boolean return values that H could return then
>>>>>>>>>> the
>>>>>>>>>> question: Does your input halt? is essentially a
>>>>>>>>>> self-contradictory
>>>>>>>>>> (thus incorrect) question in these cases.
>>>>>>>>>>
>>>>>>>>>> The inability to correctly answer an incorrect question places
>>>>>>>>>> no actual
>>>>>>>>>> limit on anyone or anything.
>>>>>>>>>>
>>>>>>>>>> This insight opens up an alternative treatment of these
>>>>>>>>>> pathological
>>>>>>>>>> inputs the same way that ZFC handled Russell's Paradox.
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> *The halting problem proofs merely show that*
>>>>>>>>> *self-contradictory questions have no correct answer*
>>>>>>>>>
>>>>>>>>> *A self-contradictory question is defined as*
>>>>>>>>>    Any yes/no question that contradicts both yes/no answers.
>>>>>>>>>
>>>>>>>>> For every H in the set of all Turing Machines there exists a D
>>>>>>>>> that derives a self-contradictory question for this H in that
>>>>>>>>> (a) If this H says that its D will halt, D loops
>>>>>>>>> (b) If this H says that its D will loop, D halts.
>>>>>>>>> *Thus the question: Does D halt? is contradicted by some D for
>>>>>>>>> each H*
>>>>>>>>
>>>>>>>> *proving that this is literally true*
>>>>>>>> *The halting problem proofs merely show that*
>>>>>>>> *self-contradictory questions have no correct answer*
>>>>>>>>
>>>>>>>
>>>>>>>     Nope, since each specific question HAS
>>>>>>>     a correct answer, it shows that, by your
>>>>>>>     own definition, it isn't "Self-Contradictory"
>>>>>>>
>>>>>>> *That is a deliberate strawman deception paraphrase*
>>>>>>> *That is a deliberate strawman deception paraphrase*
>>>>>>> *That is a deliberate strawman deception paraphrase*
>>>>>>>
>>>>>>> There does not exist a solution to the halting problem because
>>>>>>> *for every Turing Machine of the infinite set of all Turing
>>>>>>> machines*
>>>>>>> *for every Turing Machine of the infinite set of all Turing
>>>>>>> machines*
>>>>>>> *for every Turing Machine of the infinite set of all Turing
>>>>>>> machines*
>>>>>>>
>>>>>>> there exists a D that makes the question:
>>>>>>> Does your input halt?
>>>>>>> a self-contradictory thus incorrect question.
>>>>>>
>>>>>>     Where does it say that a Turing
>>>>>>     Machine must exist to do it?
>>>>>>
>>>>>> *The only reason that no such Turing Machine exists is*
>>>>>>
>>>>>> For every H in the set of all Turing Machines there exists a D
>>>>>> that derives a self-contradictory question for this H in that
>>>>>> (a) If this H says that its D will halt, D loops
>>>>>> (b) If this H that says its D will loop it halts.
>>>>>> *Thus the question: Does D halt? is contradicted by some D for
>>>>>> each H*
>>>>>>
>>>>>> *therefore*
>>>>>>
>>>>>> *The halting problem proofs merely show that*
>>>>>> *self-contradictory questions have no correct answer*
>>>>>
>>>>>     The issue that you ignore is that you are
>>>>>     conflating a set of questions with a question,
>>>>>     and are basing your logic on a strawman.
>>>>>
>>>>> It is not my mistake. Linguists understand that the
>>>>> context of who is asked a question changes the meaning
>>>>> of the question.
>>>>>
>>>>> This can easily be shown to apply to decision problem
>>>>> instances as follows:
>>>>>
>>>>> In that H.true and H.false are the wrong answer when
>>>>> D calls H to do the opposite of whatever value that
>>>>> either H returns.
>>>>>
>>>>> Whereas exactly one of H1.true or H1.false is correct
>>>>> for this exact same D.
>>>>>
>>>>> This proves that the question: "Does your input halt?"
>>>>> has a different meaning across the H and H1 pairs.
>>>>
>>>>     It *CAN* if the question asks something about
>>>>     the person being questioned.
>>>>
>>>>     But it *CAN'T* if the question doesn't in any
>>>>     way refer to who you ask.
>>>>
>>>> D calls H thus D DOES refer to H
>>>> D does not call H1 therefore D does not refer to H1
>>>>
>>>
>>>     The QUESTION doesn't refer to the person
>>>     being asked?
>>>
>>>     That D calls H doesn't REFER to the asker,
>>>     but to a specific machine.
>>>
>>> For the H/D pair D does refer to the specific
>>> machine being asked: Does your input halt?
>>> D knows about and references H.
>>
>>    Nope. The question "does this input, representing
>>    D(D), halt?" does NOT refer to any particular decider,
>>    just whatever one it is given to.
>>
>> *You can ignore that D calls H; nonetheless, when D*
>> *calls H this does mean that D <is> referencing H*
>>
>> The only way that I can tell that I am proving my point
>> is that rebuttals from people that are stuck in rebuttal
>> mode become increasingly nonsensical.
>>
>
>    "CALLING H doesn't REFER to the decider deciding it."
>
> Sure it does: with H(D,D), D is calling the decider deciding it.
>


Re: Does the halting problem actually limit what computers can do?

https://www.rocksolidbbs.com/computers/article-flat.php?id=12019&group=comp.ai.philosophy#12019

From: polcott2@gmail.com (olcott)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Subject: Re: Does the halting problem actually limit what computers can do?
Date: Mon, 30 Oct 2023 19:39:51 -0500
Organization: A noiseless patient Spider
Message-ID: <uhpicn$mn7v$1@dont-email.me>
 by: olcott - Tue, 31 Oct 2023 00:39 UTC

On 10/30/2023 7:04 PM, olcott wrote:
>    "CALLING H doesn't REFER to the decider deciding it."
>
> Sure it does: with H(D,D), D is calling the decider deciding it.
>


Re: Does the halting problem actually limit what computers can do?

https://www.rocksolidbbs.com/computers/article-flat.php?id=12020&group=comp.ai.philosophy#12020

From: richard@damon-family.org (Richard Damon)
Newsgroups: sci.math,sci.logic,comp.theory,comp.ai.philosophy
Subject: Re: Does the halting problem actually limit what computers can do?
Date: Mon, 30 Oct 2023 17:58:30 -0700
Organization: i2pn2 (i2pn.org)
Message-ID: <uhpjfm$3bucv$2@i2pn2.org>
 by: Richard Damon - Tue, 31 Oct 2023 00:58 UTC

On 10/30/23 5:39 PM, olcott wrote:
>>     "CALLING H doesn't REFER to the decider deciding it."
>>
>> Sure it does: with H(D,D), D is calling the decider deciding it.
>>
>
>    Nope, D is calling the original H, no matter
>    WHAT decider is deciding it.
>
> Duh? calling the original decider when
> the original decider is deciding it
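
The H/H1 distinction argued earlier in the thread can also be made concrete. In the sketch below, D is built to contradict one specific decider H; a second decider H1, which D does not call, answers the same question about the same D correctly. Both deciders are hypothetical stand-ins, and H1 "decides" by simply running its input, which is safe here only because this particular D halts.

```python
def make_D(H):
    # D consults the specific decider H it was built against,
    # then does the opposite of H's verdict.
    def D():
        if H(D):
            while True:   # H said "halts", so D loops
                pass
        return "halted"   # H said "loops", so D halts
    return D

def H(program):
    # The decider D is built against: it always answers "loops" (False).
    return False

def H1(program):
    # A different decider that D does NOT call. For this demo it just
    # runs the program -- safe only because this particular D halts.
    program()
    return True

D = make_D(H)
assert D() == "halted"  # D actually halts
assert H(D) is False    # the H that D contradicts gets it wrong
assert H1(D) is True    # a decider that D never references gets it right
```

Whether this supports the "different meaning per decider" reading or the standard "no single H works for every D" reading is exactly what the thread disputes; the code only exhibits the asymmetry itself.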


Re: Does the halting problem actually limit what computers can do?

https://www.rocksolidbbs.com/computers/article-flat.php?id=12022&group=comp.ai.philosophy#12022

From: donstockbauer@hotmail.com (Don Stockbauer)
Newsgroups: comp.ai.philosophy
Subject: Re: Does the halting problem actually limit what computers can do?
Date: Tue, 31 Oct 2023 05:57:49 -0700 (PDT)
Message-ID: <fa0bdfc0-dc81-4bbd-a247-886be485c846n@googlegroups.com>
 by: Don Stockbauer - Tue, 31 Oct 2023 12:57 UTC

On Monday, October 30, 2023 at 7:58:35 PM UTC-5, Richard Damon wrote:
> On 10/30/23 5:39 PM, olcott wrote:
> > On 10/30/2023 7:04 PM, olcott wrote:
> >> On 10/30/2023 6:17 PM, olcott wrote:
> >>> On 10/30/2023 5:46 PM, olcott wrote:
> >>>> On 10/30/2023 5:10 PM, olcott wrote:
> >>>>> On 10/30/2023 3:11 PM, olcott wrote:
> >>>>>> On 10/30/2023 1:08 PM, olcott wrote:
> >>>>>>> On 10/30/2023 12:23 PM, olcott wrote:
> >>>>>>>> On 10/30/2023 11:57 AM, olcott wrote:
> >>>>>>>>> On 10/30/2023 11:29 AM, olcott wrote:
> >>>>>>>>>> On 10/29/2023 12:30 PM, olcott wrote:
> >>>>>>>>>>> *Everyone agrees that this is impossible*
> >>>>>>>>>>> No computer program H can correctly predict what another
> >>>>>>>>>>> computer
> >>>>>>>>>>> program D will do when D has been programmed to do the
> >>>>>>>>>>> opposite of
> >>>>>>>>>>> whatever H says.
> >>>>>>>>>>>
> >>>>>>>>>>> H(D) is functional notation that specifies the return value
> >>>>>>>>>>> from H(D)
> >>>>>>>>>>> Correct(H(D)==false) means that H(D) is correct that D does
> >>>>>>>>>>> not halt
> >>>>>>>>>>> Correct(H(D)==true) means that H(D) is correct that D does halt
> >>>>>>>>>>>
> >>>>>>>>>>> For all H ∈ TM there exists input D such that
> >>>>>>>>>>> (Correct(H(D)==false) ∨ (Correct(H(D)==true))==false
> >>>>>>>>>>>
> >>>>>>>>>>> *No one pays attention to what this impossibility means*
> >>>>>>>>>>> The halting problem is defined as an unsatisfiable
> >>>>>>>>>>> specification thus
> >>>>>>>>>>> isomorphic to a question that has been defined to have no
> >>>>>>>>>>> correct
> >>>>>>>>>>> answer.
> >>>>>>>>>>>
> >>>>>>>>>>> What time is it (yes or no)?
> >>>>>>>>>>> has no correct answer because there is something wrong with the
> >>>>>>>>>>> question. In this case we know to blame the question and not
> >>>>>>>>>>> the one
> >>>>>>>>>>> answering it.
> >>>>>>>>>>>
> >>>>>>>>>>> When we understand that there are some inputs to every TM H that
> >>>>>>>>>>> contradict both Boolean return values that H could return
> >>>>>>>>>>> then the
> >>>>>>>>>>> question: Does your input halt? is essentially a
> >>>>>>>>>>> self-contradictory
> >>>>>>>>>>> (thus incorrect) question in these cases.
> >>>>>>>>>>>
> >>>>>>>>>>> The inability to correctly answer an incorrect question
> >>>>>>>>>>> places no actual
> >>>>>>>>>>> limit on anyone or anything.
> >>>>>>>>>>>
> >>>>>>>>>>> This insight opens up an alternative treatment of these
> >>>>>>>>>>> pathological
> >>>>>>>>>>> inputs the same way that ZFC handled Russell's Paradox.
> >>>>>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>> *The halting problem proofs merely show that*
> >>>>>>>>>> *self-contradictory questions have no correct answer*
> >>>>>>>>>>
> >>>>>>>>>> *A self-contradictory question is defined as*
> >>>>>>>>>> Any yes/no question that contradicts both yes/no answers.
> >>>>>>>>>>
> >>>>>>>>>> For every H in the set of all Turing Machines there exists a D
> >>>>>>>>>> that derives a self-contradictory question for this H in that
> >>>>>>>>>> (a) If this H says that its D will halt, D loops
> >>>>>>>>>> (b) If this H that says its D will loop it halts.
> >>>>>>>>>> *Thus the question: Does D halt? is contradicted by some D for
> >>>>>>>>>> each H*
> >>>>>>>>>
> >>>>>>>>> *proving that this is literally true*
> >>>>>>>>> *The halting problem proofs merely show that*
> >>>>>>>>> *self-contradictory questions have no correct answer*
> >>>>>>>>>
> >>>>>>>>
> >>>>>>>> Nope, since each specific question HAS
> >>>>>>>> a correct answer, it shows that, by your
> >>>>>>>> own definition, it isn't "Self-Contradictory"
> >>>>>>>>
> >>>>>>>> *That is a deliberate strawman deception paraphrase*
> >>>>>>>> *That is a deliberate strawman deception paraphrase*
> >>>>>>>> *That is a deliberate strawman deception paraphrase*
> >>>>>>>>
> >>>>>>>> There does not exist a solution to the halting problem because
> >>>>>>>> *for every Turing Machine of the infinite set of all Turing
> >>>>>>>> machines*
> >>>>>>>> *for every Turing Machine of the infinite set of all Turing
> >>>>>>>> machines*
> >>>>>>>> *for every Turing Machine of the infinite set of all Turing
> >>>>>>>> machines*
> >>>>>>>>
> >>>>>>>> there exists a D that makes the question:
> >>>>>>>> Does your input halt?
> >>>>>>>> a self-contradictory thus incorrect question.
> >>>>>>>
> >>>>>>> Where does it say that a Turing Machine must exist to do it?
> >>>>>>>
> >>>>>>> *The only reason that no such Turing Machine exists is*
> >>>>>>>
> >>>>>>> For every H in the set of all Turing Machines there exists a D
> >>>>>>> that derives a self-contradictory question for this H in that
> >>>>>>> (a) If this H says that its D will halt, D loops
> >>>>>>> (b) If this H says that its D will loop, D halts.
> >>>>>>> *Thus the question: Does D halt? is contradicted by some D for
> >>>>>>> each H*
> >>>>>>>
> >>>>>>> *therefore*
> >>>>>>>
> >>>>>>> *The halting problem proofs merely show that*
> >>>>>>> *self-contradictory questions have no correct answer*
> >>>>>>
> >>>>>> The issue that you ignore is that you are
> >>>>>> conflating a set of questions with a question,
> >>>>>> and are basing your logic on a strawman.
> >>>>>>
> >>>>>> It is not my mistake. Linguists understand that the
> >>>>>> context of who is asked a question changes the meaning
> >>>>>> of the question.
> >>>>>>
> >>>>>> This can easily be shown to apply to decision problem
> >>>>>> instances as follows:
> >>>>>>
> >>>>>> In that both H.true and H.false are the wrong
> >>>>>> answer when D calls H and does the opposite of
> >>>>>> whatever value H returns.
> >>>>>>
> >>>>>> Whereas exactly one of H1.true or H1.false is correct
> >>>>>> for this exact same D.
> >>>>>>
> >>>>>> This proves that the question: "Does your input halt?"
> >>>>>> has a different meaning across the H and H1 pairs.
> >>>>>
> >>>>> It *CAN* if the question asks something about
> >>>>> the person being questioned.
> >>>>>
> >>>>> But it *CAN'T* if the question doesn't in any
> >>>>> way refer to who you ask.
> >>>>>
> >>>>> D calls H thus D DOES refer to H
> >>>>> D does not call H1 therefore D does not refer to H1
> >>>>>
> >>>>
> >>>> The QUESTION doesn't refer to the person
> >>>> being asked?
> >>>>
> >>>> That D calls H doesn't REFER to the asker,
> >>>> but to a specific machine.
> >>>>
> >>>> For the H/D pair D does refer to the specific
> >>>> machine being asked: Does your input halt?
> >>>> D knows about and references H.
> >>>
> >>> Nope. The question "does this input, representing
> >>> D(D), halt?" does NOT refer to any particular decider,
> >>> just whatever one it is given to.
> >>>
> >>> *You can ignore that D calls H; nonetheless, when D*
> >>> *calls H this does mean that D <is> referencing H*
> >>>
> >>> The only way that I can tell that I am proving my point
> >>> is that rebuttals from people that are stuck in rebuttal
> >>> mode become increasingly nonsensical.
> >>>
> >>
> >> "CALLING H doesn't REFER to the decider deciding it."
> >>
> >> Sure it does: with H(D,D), D is calling the decider deciding it.
> >>
> >
> > Nope, D is calling the original H, no matter
> > WHAT decider is deciding it.
> >
> > Duh? calling the original decider when
> > the original decider is deciding it
> Which doesn't mean the problem has a REFERENCE, because code it uses
> doesn't change.
>
> I guess you DO think that the following code makes y a reference to x:
> x = 1;
> y = 1;
> Which proves your stupidity.
> >
> > Because the halting problem and Tarski Undefinability
> > (attempting to formalize the notion of truth itself)
> > are different aspects of the same problem:
> So? Where does "Because" apply here?
> >
> > My same ideas can be used to automatically divide
> > truth from disinformation so that climate change
> > denial does not cause humans to become extinct.
> >
> But clearly it isn't as you are spreading disinformation, as has been
> proven.
> > Are you going to perpetually play head games?
> >
> No, I will continue to point out actual Truth.
>
> YOU are the one playing Head Games.
>
> To be a "Reference" it needs to always end up using the thing
> referenced, which isn't what happens here.
>
> You are just showing how IGNORANT you are of basic facts.
>
> How can you possibly think you can determine what is truth when you
> continue to base your arguments on LIES?
>
> Or, is your intent to get rid of "Disinformation" by just saying it
> doesn't exist because anything we want to be true we can make true.
>
> That seems to be the basis of your logic.

