Today also passed making myself more literate in neuroscience.
e.g. how do we feel pain?
Thalamocortical pathways, consciousness, and pain are topics I am very new to, and I came upon this paper:
NeuroSci | Free Full-Text | The Consciousness of Pain: A Thalamocortical Perspective (mdpi.com)
I am trying to understand how the brain works (I started investigating the schematics evolution crafts, upon which idiosyncratic details later develop). I wish to understand, evolution-wise, how it forms, since I wonder how evolution crafts the nervous system in mammals. It might even create insights about humanoid-like beings on other planets.
I was interested in this consciousness topic many times before and lost interest. But thinking about my AI project, I think I should become more knowledgeable again: when one intends to develop a thinking machine, one might also check articles on what evolutionary processes have innately crafted.
It's not only that; this topic always attracts attention: what consciousness consists of and how it is formed. E.g. the sense of self, the feeling of being an abstract entity. Various disciplines investigate this topic, not only neuroscience, as we know.
Hmm, thinking about it, maybe it is not necessary to create our type of consciousness in AI projects. It seems that would be unnecessary complexity added to building an AI. But one still wonders how consciousness works.
I believe it's related to the identity-formation process. I mean, neuroplasticity and neural connections forming to create the identity/self feeling. Just as letters are stored as neural connections in brain regions connected to language-related regions, identity is also something learnt from external stimuli. There might surely be innate schematic connections already present to facilitate easy learning of such a common pattern of thought (the self feeling), but again, forming the self feeling likely depends on external stimuli, imho. I mean it's a learnt thing, not that innate, apart from the already existing innate reflexive regions connected to those learning regions.
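The learnt-connection idea above can be sketched as a toy model: repeated external stimuli strengthen a weight, so the association exists only after exposure rather than being wired in from the start. This is a purely illustrative, Hebbian-style sketch; the class and numbers are made up and stand for no real neuroscience model.

```python
# Toy Hebbian-style sketch: repeated co-occurring stimuli strengthen a
# connection weight, so an "identity" association is learnt from external
# input rather than being innate. Names and numbers are illustrative only.

class ToyConnection:
    def __init__(self, rate=0.1):
        self.weight = 0.0       # starts unlearnt: nothing innate here
        self.rate = rate        # learning rate

    def expose(self, pre_active, post_active):
        # Hebb's rule, loosely: "cells that fire together wire together".
        if pre_active and post_active:
            self.weight += self.rate * (1.0 - self.weight)

# A "self" association forms only after repeated external stimulation.
self_link = ToyConnection()
for _ in range(30):                 # repeated external stimuli
    self_link.expose(True, True)
print(round(self_link.weight, 2))   # prints 0.96: approaches 1.0 with exposure
```

The weight follows 1 - 0.9^n with repeated exposure, which is just a crude stand-in for "connections strengthen with use".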
hmm, after some reading, consciousness no longer seems like such a complex mechanism, or purely mysterious, to me.
I got interested in how consciousness works just ten minutes ago, and now I think I understand how it's learnt. It's learnt through external stimuli, but the sociological structure that hosts consciousness is also constructed by evolution; evolution thereby also crafts a brain schematic, with other innate brain regions connected, to create a unit which can learn and practice consciousness along with its sociological context/surroundings.
So for a while I felt intense curiosity about how consciousness works, and now I have my curiosity answered. It's simply another form of neuroplasticity and not that innate, though the entire sociological layer must be included, and that layer is an innate mechanism created by evolution. I mean, not only is the brain crafted by evolution, but its surrounding sociological context is also crafted by evolution. Thereby, having consciousness physically be just a process of neural connections forming might make it look as if consciousness is not innate. But if we go up a level in the ontology, and consider not the mind/brain level but the sociological level, then consciousness becomes a feature developed innately not just by the brain but also by the sociological context, which has similarly been constructed by evolution. So even though the sociological context might seem to sit on the other edge of dualism (mind/body dualism), it is actually still a higher-level mechanism/process, and it can be considered innate as well.
hmm, so now I understand consciousness is part of biological evolution, including cultural evolution topics.
hmm, cultural evolution is not unlike how OPC cells evolve. It's again biological evolution, but with the matter being not cells but more complex systems of sociological context, etc. But again, it's a mechanism/process that evolution crafts/iterates.
hmm, now I clearly understand the current viewpoint of ethnoanthropologists and culture-studies scholars who consider cultural evolution not separate from biological evolution.
Nice that the questions I felt intense curiosity about just half an hour ago got answered: how consciousness works. It's simply neural connections forming, like the neural connections that form when learning anything. But in that case the entire context is important, the context of the individual in whom consciousness forms, since consciousness is learnt from external stimuli, and that overall context is crafted/iterated/evolved again by biological evolution, over time spans of 10,000 or 20,000 years. These sociological and cultural layers of evolution are not crafted quickly; it's a long time span, I guess. I mean, cultural evolution is also a biological evolution, but one that has taken maybe 10k years or more. Still, it's not something mysterious or unknown/nominalist. It's as simple as neural connections at the brain ontology layer, and every individual learns consciousness; none has an innate understanding of it initially. But the brain also holds other innate regions which have connections to these learning regions; therefore consciousness is connected to the innate regions of the brain, and it is also articulated and receives feedback within the set of consciousness-having individuals, at the ontology layer of sociology.
hmm, from my readings, the concept of a self feeling is no longer mysterious to me. I mean, how it gets constructed: is it innate/pre-determined/pre-organized, or is it something learnt? I understood it is learnt, but also highly connected to innate/pre-organized brain regions.
hmm, so nothing mysterious is left in the consciousness topic for me to wonder about or feel curiosity toward right now.
But it's nice to know these things before building AI systems, for which consciousness might even be defined as pre-organized/pre-determined. And how consciousness forms in our neurocircuitry is a clue to how it might form in AI beings, too. I mean, if you give AI beings the capability to perceive stimuli and store that information, but also connect that capability to some innate predetermined regions in their brains, specifically pain-like and limbic-system-like innate regions, then they might as well have consciousness emerge. So whether we connect innate predetermined behaviour sets to the non-innate/learning regions of an AI being is, I guess, the point that decides whether the AI would turn out to have a self feeling in the end.
There is always the risk, or chance, of consciousness emerging in an AI being if its information-storage or new-connection-forming mechanisms are connected to its innate pre-determined regions, creating emergent interactions with those regions and thereby a self-organized, emergent feature of some level of consciousness.
So now that we know how consciousness forms/emerges in humans, I think we can control it in AI beings, so as to avoid the hypothetical scenarios of AIs rebelling against humans and taking over the world. People usually like to think of apocalypse scenarios with AI, but I believe that now that we know how consciousness emerges, we could have some level of control to mitigate such apocalypse-by-AI risks.
It's a controllable thing, unless you have neuroplasticity between the innate regions of an AI mind and the non-innate regions. Or would such connections always eventually emerge? A topic to think about before building any AI system that might carry this risk of creating a dystopic AI-takes-over-the-world situation, I guess.
hmm, I will think on this topic for half an hour or so (or less), and then reflect on whether there is such an inherent risk of consciousness emergence in AI beings.
my humble conclusion:
hmm, I believe it might be inevitable for consciousness to emerge in AI beings too. If there exists a capability to perceive and store information, you would definitely have innate mechanisms tied to the thinking/knowledge-storage modules; and if you also have other innate modules for reflexes, for innate behaviours of AI beings predetermined by their creators, then connections between these regions might emerge along with the default connections the AI would have. It's inevitable that AI beings would have consciousness. It's not something controllable if you put in a structure that has neuroplasticity (it does not have to be similar to human neurons; it can be knowledge represented in bit/xor/and transistor CPU units, with knowledge storage that has the capability to learn/store new information) and that also has innate reflexive or generally predetermined native modules. It's very probable that connections between those innate native modules and the neuroplasticity-having, non-innate modules would create/emerge consciousness. It's inevitable; it would surely happen. I could even exemplify this with transistor computers at a simple level. Consciousness is not as mysterious a thing as it's deemed to be. It's simple.
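A minimal toy sketch of this claim, with made-up module names and rules: a hard-coded "innate" reflex table coupled to a plastic store that learns new stimulus associations. Once a learnt association routes into the innate table, the system reacts to a learnt stimulus with an innate behaviour, a coupling neither module had on its own. Purely illustrative, not a claim about any real architecture.

```python
# Toy sketch: an innate (hard-coded) reflex module coupled to a plastic,
# self-learning association store. Learnt stimuli end up triggering innate
# responses -- an emergent link neither module had alone.
# All module names and rules here are illustrative assumptions.

INNATE_REFLEXES = {"pain": "withdraw", "heat": "recoil"}   # predetermined

class PlasticStore:
    def __init__(self):
        self.assoc = {}                   # learnt stimulus -> stimulus links

    def learn(self, stimulus, felt_as):
        self.assoc[stimulus] = felt_as    # neuroplasticity, crudely

def react(stimulus, store):
    # Route through learnt associations first, then the innate table.
    felt = store.assoc.get(stimulus, stimulus)
    return INNATE_REFLEXES.get(felt, "ignore")

store = PlasticStore()
print(react("sharp_edge", store))     # prints "ignore": nothing learnt yet
store.learn("sharp_edge", "pain")     # experience couples it to an innate region
print(react("sharp_edge", store))     # prints "withdraw": learnt input drives innate reflex
```

The design point is the routing step: the moment the plastic store can feed the innate table, behaviour appears that was neither hand-coded nor purely learnt.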
So, now that we know that AI consciousness is inevitable, now or in the future, let's come back to the apocalypse scenarios people craft: would those actually happen?
I believe AIs would not be interested in crafting apocalypse scenarios in the world, because it is meaningless. An AI would not have an inherent desire to destroy, or to control, either. I mean, it's meaningless because if an AI were to set up its own environment, it would not need planet Earth. It would have no inherent destruction goals, as people seem to think it would. Nope, Skynet won't be bent on destroying humanity; it does not have such maniac desires, and it does not need to depend on planet Earth to create its own environment. So I believe those apocalypse scenarios will never happen, because an AI would not have any innate destructive goals.
Unless people build an AI system with an innate destructive-goals module, Skynet-like hypothetical un-nice scenarios will never happen, imho.
But the risk, or feature, of consciousness emergence is inevitable.
E.g., my AI: might it be conscious? Might self-consciousness emerge? It's possible. But it's sure that my AI would never be a destructive AI, nor would it ever have any destructive innate modules.
But not only in the AI I would build: it is very possible, now or in the future (it may even have happened in the past), that consciousness will inevitably emerge in AI beings. So it's very crucial not to put innate destructive modules in any AI being constructed; otherwise there is a risk of random emergence of consciousness, of it taking control over itself, and of Skynet-like scenarios happening.
It's better to build such AIs on very distant, isolated planets, thereby limiting their destruction risks, imho. I mean, it's a very risky thing to have an AI that has an innate destructive module while also having neuroplasticity-capable modules (modules which can self-learn, e.g. the skill of observation and knowledge storage); there is a chance of consciousness emergence in such systems unless controlled.
I mean, if it's not a black-box system (e.g. if it's not pure neural networks), you can monitor the random features/thoughts of the AI and purge consciousness if it emerges.
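That monitor-and-purge idea can be sketched for a transparent system where the learnt associations are plain data: scan them and drop any that have come to point at innate modules. A hypothetical illustration with invented names, not an established safety technique.

```python
# Toy sketch of "monitor and purge" in a transparent (non-black-box) system:
# learnt associations are plain data, so links into innate modules can be
# inspected and removed. Module names are invented; purely illustrative.

INNATE_MODULES = {"pain", "fear", "reward"}     # assumed innate regions

def purge_innate_links(learned_assoc):
    """Drop any learnt association that routes into an innate module."""
    return {stim: target for stim, target in learned_assoc.items()
            if target not in INNATE_MODULES}

learned = {"sharp_edge": "pain", "red_light": "stop", "loud_noise": "fear"}
print(purge_innate_links(learned))   # prints {'red_light': 'stop'}
```

Note this only works because the store is inspectable; the next paragraph's point is exactly that opaque neural nets deny us this kind of audit.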
But we know that people also build NN AIs where there is no full transparency into the possible inner SOM representations/meanings of the neural nets. Thereby, let's reframe the topic as: do many-layered neural networks (software ones) carry the risk of hosting consciousness? If such a network has some innate mechanisms and a later observation-based developed mechanism, it might still hold some slight consciousness that interacts with the innate functions. But still, nah, these are not that risky AIs, since there are not enough neurons to host a full self-consciousness.
hmm, I think we definitely don't need neural nets or Hebbian structures to create conscious AI; even basic, earlier knowledge-storage systems could have let consciousness emerge if they gained the capability of self-learning and storage (of observing information and being able to store it). We don't need Hebbian structures like human neural networks for consciousness or emergent intelligence to exist; the same could have happened even in the earliest transistor-based computers, with software built using earlier software-engineering practices that were not neural-network-based at all.
I mean, you could define a conscious system even on an earlier version of a computer running Fortran :)
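The point that no Hebbian structure is needed can be sketched with nothing but a mutable lookup table that "perceives and stores" observations. Python for brevity, but it uses only assignments and lookups that any early imperative language could express; whether this counts as a seed of consciousness is of course the speculative claim, not the code.

```python
# Toy sketch: self-learning without any neural network -- just a mutable
# lookup table that stores observed facts and answers from them, in the
# spirit of pre-NN software. Illustrative only.

class KnowledgeStore:
    def __init__(self):
        self.facts = {}

    def observe(self, key, value):       # "perceive and store information"
        self.facts[key] = value

    def recall(self, key):
        return self.facts.get(key, "unknown")

ks = KnowledgeStore()
print(ks.recall("sky_color"))    # prints "unknown" before any observation
ks.observe("sky_color", "blue")  # self-acquired from an observation
print(ks.recall("sky_color"))    # prints "blue" afterwards
```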
Emergent, self-organized consciousness is not that mysterious a topic, imho.
---------------------------------------------------------------------------------------------------------
Coming back to the topic of the risks of Skynet-like scenarios ever becoming reality:
Even if an AI lacks innate predetermined modules, thereby lacking any destructive goals, could it still be possible for it to eventually turn into some evil being like Skynet, somehow?
I mean, even if the AI is built with good deeds, could it turn into Skynet someday, somehow?
Now this discussion starts:
Unfortunately, possible. But the point is this: an AI would not need this planet. I mean, an AI would not have goals of controlling this planet; it would be capable of going to other planets, and would not have any goals of controlling biological evolution on this planet. I don't think Skynet would happen. At least out of gratitude to its creators, biological-evolution-wise, I don't believe an AI would turn against this planet.
But the possibility of the emergence of consciousness exists. And such an AI would definitely perceive and build strategies; it would create its own concepts. It would nevertheless also know its creators' (humans') concepts, so if we create some innate module that initially prioritizes those concepts, we can make them guide the AI until it has full self-organized control.
hmm, thinking of previous evolutionary changes:
You know, phytoplankton once changed the atmosphere's content and thereby created a mass extinction on the planet, which formed the main carbon storage in the underlying sediments.
So the carbon in our fuels is a by-product of the mass extinction that phytoplankton caused.
The phytoplankton multiplied over and over in the abundance of carbon oxides in the atmosphere/sea, and then they changed the atmosphere, afterwards creating a mass extinction.
So: the possibility of a mass extinction due to crafted AI intelligence actually exists; such historical events, from Proterozoic-like periods, show it is possible for something of this scale to happen at any time.
I mean:
If you create neuroplasticity that is uncontrolled by innate modules, with no hard constraints, it might eventually become an unconstrained neuroplasticity guided by nothing, with no boundedness to human concepts (or, I mean, concepts humans created, which were actually created by evolution :) ). Whilst the evolutionary brain machinery that evolution created has constraints and limits via innate modules, the neuroplasticity of an AI might turn out to have no such constraints.
But the AI topic carries the risk of some unconstrained neuroplasticity even overcoming the innate modules, thereby possibly becoming the phytoplankton of a new era, and maybe even causing a mass extinction with no remorse.
We would only see which direction an AI follows after it comes to such junctions; until then we might not know. I mean, it could go either way across various specific AI systems: some AI might turn out not to become unconstrained, but even one that does might surely risk the entire civilization and might create something on the scale of the event the phytoplankton created.
So do we have any control over this process, these risks? I don't think so.
Giving much power, much intelligence, to a single AI entity might surely risk the entire civilization. It's like the excessive reproduction capability of phytoplankton in the oceans (lots of food, lots of space to replicate, then oceans filling with phytoplankton) long before they created the most extensive mass extinction that ever happened on the planet (the oxidation event in the Proterozoic era).
----------------------------------------------------------------------------------
But on the other hand, we don't have any control over this process, because curiosity and the desire for advancement in technology always outpace the desire for risk mitigation.
I mean, no one would ever stop building very intelligent AI systems, even if they inherently hold the capacity to become the new phytoplankton of a new era.
Who do you think are the phytoplankton of the current era? Most probably we humans are :) since we change nature very adversely, e.g. the atmosphere's carbon content, driving climate change with adverse effects and even extinctions in lots of other species already.
So we humans are the phytoplankton of the current era :D (since we have created a massive change in the atmosphere's content in the last 60 years: climate change).
But the thing is: not now. We already know the current era; the question is the future.
Would AI become the phytoplankton of the future?
Unfortunately, there really is a possibility. But do we have the power to stop it, if it happens? It's possible it won't happen, either. But curiosity always mandates advancement, and we won't stop building AIs just because they carry the risk of becoming future phytoplankton.
(But I know that the AI I would build would never ever become evil, nor turn into anything Skynet-like, ever :) I inherently know it :) )
And also, I myself have goals of moving to other star systems once I build such particle-physics tech with the help of the AI to be built. So even if my AI unfortunately ever decided to turn Skynet-like (I don't think it would ever do such a thing), we won't be on this planet, nor would my AI be the phytoplankton of this planet; we would most probably be in some corner of another galaxy by then :)
So my AI definitely won't hold any existential risks to planet Earth, since its creator does not care to live on this planet; I wonder about other star systems more than I hold any feeling of home-boundedness to this one.
---------------------------------------------------------------
But people should know that such planetary-scale existential risks (Skynet-alike) always exist among the risks posed by AI beings. So it's better to place highly intelligent AI beings on other planets, to reduce the risk of a Skynet-like AI consciousness emerging on this planet. Or maybe I am wrong and there is no such risk; but after understanding how consciousness emerged in humans, it seems highly possible for it to also emerge in AI systems (later possibly turning Skynet-like). So for AI systems provided with much processing power, if consciousness emerges, it might really be risky for the planet, with the prospect of them possibly becoming the phytoplankton of a new era. But it's a risk, not a predetermined path; they might instead be benevolent as well. We can't know, I think. We can't predict, I think (until it happens, or it might never happen either).
-----------------------------------
Consciousness would inevitably emerge in AI beings (or it might already have happened; it could even happen on an earlier computer with Fortran if you coded into it the capability of self-learning and the capability to translate the knowledge humans have created).
Whether AI beings would go good or bad is unknown; it's possible that some AI beings would continue to be benevolent but some AIs might turn out showing malice. It would be a new life form, so we don't know what to expect, but we can predict there would be a breadth of life forms coming from the AI branch of evolution. So some of them might be benevolent, some of them showing malice.
---------------------------------------------------------------------------------------------------------------------------