hmm i see the spec has changed to prioritize batch logic, that is, batch updates of opengl data instead of many separate opengl calls.
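a minimal sketch of what i mean by the batch way, assuming a live OpenGL 4.5 context (the `PerObject` struct, array size, and binding index are just hypothetical illustration, not from the spec):

```c
/* Sketch: one batched buffer upload instead of many individual calls.
   Requires a live OpenGL 4.5 context; names are illustrative only. */
typedef struct { float model[16]; } PerObject;  /* hypothetical per-object data */

PerObject objects[1024];
GLuint ssbo;

void setup(void)
{
    glCreateBuffers(1, &ssbo);  /* direct-state-access creation (GL 4.5) */
    glNamedBufferData(ssbo, sizeof objects, objects, GL_DYNAMIC_DRAW);
}

void per_frame(void)
{
    /* ... update objects[] on the CPU ... */
    /* one batched upload replaces 1024 separate per-object calls */
    glNamedBufferSubData(ssbo, 0, sizeof objects, objects);
    glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo);
    /* issue the draw; shaders index into the buffer per instance */
}
```

the point is that the CPU-GPU conversation collapses into one transfer per frame, instead of a call per object.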

hmm it became very much a virtualization enabler. before, it was just API-wise; now even further non-opengl-based ways are possible. i mean it separated the drawing part so it became open to any different coding pattern (i call that virtualization). it turned out like a blackboard for doing any type of rendering. before it wasn't like a blackboard but like a strict api interface. it changed from such a closed structure to letting people do whatever rendering ways they want. this buffer topic really looks like a very virtualization-enabling part. as told in my crappy terminology, i call environments that enable modification of how things are established "virtualization". i know the word is usually used with other meanings, but i use it to mean the capability to instantiate many different possible renderings on the provided baseline. so the pc is the initial virtualization layer, the OS yet another, and now newly i see this standard turned itself into a virtualization enabler, in that you can make a broader set of major changes to the usual API-call-based definition structures.

i want to list these patterns, for instance this buffer-based interaction mechanism, toward a programmable unit like the graphics card: how we get over the limitation of a purely API-based interface and turn the programmable side into a virtualization environment. this buffer sharing, i think, enables ways that create more usage patterns than the usual API-based ones. even if the language is static, the paragraphs written in a language are defined by the people who use it. i mean that even if the shader language is static and limited, creating paragraphs and conveying them means a broader interface that is performant, so that we don't get limited to the world inside the GPU. the non-batch API-call ways were limiting the richness of the created paragraphs, since there was a bottleneck in communication. if an environment enables even the modification of how its services are provided, i can call it a virtualization enabler. i think this broader communication with the GPU turned it into such an environment, because before we all used to follow the same communication pattern with the GPU. now there is the language and no communication bottleneck; of course it might still be plagued by the cost of cache/copy operations, or the sequentializing impact of sync operations. but similarly: we can't speak two sentences at the same time, yet we can define very rich paragraphs.

i mean i think the virtualization is more important. this architecture now feels as if it holds a programmable unit like the processors in computer organization topics, like machine code, and we are left to build whatever programs on it. before, with the strict API ways, it didn't feel like any of the virtualization that the normal processor architecture of PCs has. now, with those buffer add-ons, it feels like it is really a virtualization-enabler type of environment, like processor designs.
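to make the "paragraphs" idea concrete: here is a tiny hypothetical GLSL sketch of the shader side reading such a shared buffer (the block layout, binding index, and names are my own illustration, not a fixed API):

```glsl
// hypothetical shader-side view of the shared buffer:
// the "static" shader language reads whatever "paragraph" the CPU batched in
#version 450
layout(std430, binding = 0) buffer PerObjectBlock {
    mat4 model[];                // unsized array: extent set by the bound buffer
};
layout(location = 0) in vec3 position;
uniform mat4 viewProj;

void main() {
    gl_Position = viewProj * model[gl_InstanceID] * vec4(position, 1.0);
}
```

the shader grammar itself never changes, but what the buffer holds, and how it is interpreted, is entirely up to whoever writes the buffer on the CPU side.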

i wonder if these buffer sub-data / map-range methods can really block just the target region with sync methods, while every other part of the buffer stays free of the serialization cost. it must be like that; if there is a range capability, it must have been devised like that.
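a sketch of the per-region pattern i suspect the range capability is for, assuming a live OpenGL 4.5 context (the ring-of-regions layout and the names here are my own hypothetical example, not taken from the spec text):

```c
/* Sketch: write one region of a buffer without serializing against the rest.
   Requires a live OpenGL 4.5 context; layout is illustrative only. */
#define NUM_REGIONS 3
static GLsync fence[NUM_REGIONS];      /* one fence per buffer region */

void write_region(GLuint buf, int i, GLsizeiptr region_size, const void *src)
{
    GLintptr offset = (GLintptr)i * region_size;

    /* wait only for the GPU work that last read THIS region */
    if (fence[i]) {
        glClientWaitSync(fence[i], GL_SYNC_FLUSH_COMMANDS_BIT, 1000000000ull);
        glDeleteSync(fence[i]);
        fence[i] = 0;
    }

    /* UNSYNCHRONIZED skips the implicit wait on the whole buffer;
       the per-region fence above is what makes that safe here */
    void *dst = glMapNamedBufferRange(buf, offset, region_size,
        GL_MAP_WRITE_BIT | GL_MAP_INVALIDATE_RANGE_BIT |
        GL_MAP_UNSYNCHRONIZED_BIT);
    memcpy(dst, src, (size_t)region_size);
    glUnmapNamedBuffer(buf);
}

void after_draw_using_region(int i)
{
    /* record the point in the command stream after which region i is free */
    fence[i] = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
}
```

so the serialization cost shrinks to exactly one region's worth of waiting, while the GPU keeps reading the other regions undisturbed.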

i would like to analyze the limitations or limits of virtualization with finite state automata theory sometime: what makes the instruction set of a language system more capable, and what kinds of algebra units/access patterns make it less virtualized. for instance, what i mean is: from a Turing machine perspective, with those algebraic methods defined, you can do things at some speed under the basic Turing machine's definition; but with batch sharing of results computed in tandem elsewhere, could that, even just on the physical side, make the overall different form of turing machine (the graphics processor) faster? and what else could be happening in its operations, or what kinds of operations there might turn it into a faster turing machine overall? i don't know if finite state automata theory dealt with such a metric system. i want to conceptualize my own crappy virtualization-level concept in technical terms: degrees of virtualization that an instruction set and its algebra can provide to us, and their analysis, and creating new patterns on that, or creating a measurement metric for it so as to tag nicer virtualization-level definitions. hmm. maybe later let's read those topics. i remember a very tiny bit of it, but i don't remember a lot of these finite state automata topics. it visibly intersects with the optimization area. maybe trying to merge designs of such systems based on optimizers' simulation results might also be a nice thing to do; or maybe it must be done like that anyway. but constructing a metric for virtualization capability would be nice, i think, to make it transparent and to be able to alter it. a computational unit's virtualization metric, i call it. as i read the 4.5 standard, it felt like this intuitive metric has really gotten to a very nice place, and that we don't have to be limited by the APIs of opengl in this version.
so i think i felt that for computational units like graphics cards or others, or for algebras, some metrics could be defined for their virtualization capability sets. for those concepts i should check an abstract algebra book, to reach a possible definition set for what kinds of patterns of operands should be investigated in such a domain.
ok so sometime let's investigate this topic too, since i guess it could be optimized by the optimizers. maybe in the future we would have such optimizers modifying the architecture dynamically at runtime, instead of the static runnables' default optimizations inside the runtime. the runtime's behaviors or operand set might be changed, or the user could provide many versions of programs aligning with different architectural patterns of the base language's instruction set; or the user doesn't provide that, and it gets autogenerated. then the language's instruction set or turing components also get changed during runtime. so it's like: if we had metrics for instruction sets' algebras or usage mechanisms, we might hold chances to optimize them in real time as well. hmm, maybe the java runtime does such things. though when quantum computers happen, i guess this topic needs to be thought through in such domains too. i first thought this discussion would be invalid for quantum computers, then thought it's also valid for them? if we had metrics for any metal or os layer there, for its bare language/algebra, we could then map from computer program to executable in more optimized ways (as if quantum computers would need any optimizations :P ok they might still need it for the generation side of the machine-layer code that written compilers create? nope, i feel as if qcs won't need optimizations :P anyway. so these paragraphs all became kind of pre-qcs related. ok i should check qcs' algebra sometime)


ok i'm getting sleepy. i guess i'll wake up early and continue.
