Discussion about this post

///

I would love to read a reason why the first question is not a stupid question.

meika loofs samorzewski

I don't have a problem with the lack or illusion of consciousness, but the reduction of agency to an almost economic model is a possible mis-framing. I will look at the paper if I can.

Quick thoughts: ① Is a self-driving car going anywhere at all when it moves?

② I will have a think about this, non-consciously or not. I suspect that morality, if it is an outcome of an urge that also produces the self worlding itself among others worlding their selves, is not covered here.

③ For a counterexample (thinking ahead of reading the paper), consider the conscious agent who has no morality, i.e. the narcissist whose lack of empathy precludes them from selfing the world among others doing the same, because they cannot distinguish between the self and the world. They could tech-tree grind their way through deontological rules like a good gamer, until they reach the top of the hierarchy they perceive (and often create to suit themselves) and become the world/empire they truly are. How does this differ from an unconscious moral agent?

LLMs map what anthropologists call 'social learning', or at least the record of social learning (not intelligence). I suspect the multi-modal one-shot success stories support this view.

PS: I regard most analytical philosophy as tech-tree grinding that gamers put effort into, and continental philosophy as fan fiction.

Whichever one I'm reading, I prefer the other.
