If I'm reading this right, this supposes that there are ways of thinking, in the sense of processing information, that are fundamentally different from logic.
There are obviously frames of reference that differ from the human, but no one has ever demonstrated a way to process information that doesn't follow the mathematical principles we've established. I don't think that will ever happen, which basically means that superintelligences will just be faster minds (albeit without any of the baseline perspectives humans have, which will make them absolutely alien to us).
I was touching on two distinct things: culture, and then intelligence. In culture, I think there will be a gap between the human and the alien. There will be a point where posthuman intelligences, from a modern perspective, will cease to be human. A lot of this is touched on in EP without being fully explored: copying minds, branches and forks, editing, etc. That will create a different set of baseline expectations and mores that will diverge massively from what we consider human. The distaste that Shrieking Banshee expressed at the start of this conversation is a good example. But as I pointed out, those posthumans will probably still consider themselves human, whether through inheritance in the case of completely artificial lifeforms, or through direct continuity of identity for things like copied brains.
Intelligence is different, but related. You seem to think that any future intelligence will be comprehensible to humans because of the laws of logic. I don't agree with that at all, but I'm not proposing any kind of alternate logic either. Superhuman minds aren't simply faster; they also have greater capacity, the ability to work on vaster amounts of information at the same time. And intelligence isn't a simple factor of speed, or even of size. It's also structure and order. It's the subroutines, the neural pathways, the complex set of tools that are built to solve various intellectual tasks. And not all of this is easily explained, or comprehensible. To give an example that's realized today, consider machine learning. Basically, you throw a ton of examples at an algorithm, and it figures out on its own how to process the data. But the trick is, it's very hard for humans to reverse engineer the criteria that are ultimately used to identify faces, or detect fraud. At some point, vast intelligences of great speed and capacity that are capable of self-modification and use oblique, iterative methods to come to conclusions will become largely incomprehensible to unaugmented human minds.
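To make that machine-learning point concrete, here's a minimal sketch (the task, data, and numbers are all made up for illustration): even a single trained perceptron, about the simplest learned model there is, ends up storing what it "knows" as opaque numeric weights rather than human-readable rules.

```python
import random
random.seed(0)

# Hypothetical training data: random points in the unit square,
# labeled 1 if x + y > 1, else 0. The model is never told this rule.
examples = [(random.random(), random.random()) for _ in range(200)]
labels = [1 if x + y > 1 else 0 for x, y in examples]

# A single perceptron: two weights and a bias, nudged by trial and error.
w1, w2, b = 0.0, 0.0, 0.0
for _ in range(50):                          # training passes over the data
    for (x, y), target in zip(examples, labels):
        pred = 1 if w1 * x + w2 * y + b > 0 else 0
        err = target - pred                  # -1, 0, or +1
        w1 += 0.1 * err * x                  # shift weights toward fewer errors
        w2 += 0.1 * err * y
        b  += 0.1 * err

# The learned "criteria" are just three numbers; nothing in them
# says "x + y > 1" in any human-readable way.
print("weights:", w1, w2, "bias:", b)

# Yet the model classifies the data correctly:
correct = sum(
    (1 if w1 * x + w2 * y + b > 0 else 0) == t
    for (x, y), t in zip(examples, labels)
)
print(correct, "of", len(examples), "correct")
```

Scale that up from three numbers to billions, add self-modification, and you get the interpretability problem I'm describing: the system works, but the "why" is smeared across parameters no human can read off.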