So, current issues:


I don't know how to sample from the game DB design I coded.


e.g. it possibly always iterates sequentially.


e.g. for 1000 requests it would retrieve the first n games, right?

But it would always be the same n games in the same order, and that's an issue.

Think of the case — of course there won't be 1000 requests per second, but hypothetically consider it:

then every one of those first games would get registration attempts from a lot of people at the same time.


So all 1000 requests see the same first n games to register :S

That's an issue :S because in the ideal case, each person would see a sampled set of n games.



Hmm, I think the important parts of the DB mechanism are coded; only the HTTP bindings remain to be coded. But this issue needs to be fixed first.


Hmm, maybe I could add one more field that tracks the view count, so that e.g. out of 1000 simultaneous requests, only ~5 of them see the same first n games and the others see different regions of n games. Yep.


After fixing this issue, and after later writing unit and integration tests, I think the HTTP-facing DB module service would be ready.

I think I would randomly assign a virtual token, drawn from a fixed set of tokens, to every game created,

and then on the query side the querier would also randomize the order of that token set when retrieving games, instead of always seeing them in a fixed retrieval order. Yep, I think I like this method. Maybe it creates too many requests, but that's not important.
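The token idea could be sketched roughly like this (all names here — the token set, the fields, the 25-game page — are my assumed stand-ins, not the real module's code):

```python
import random

# Sketch of the token scheme: every game is stamped with one token from a
# fixed set at creation time, and each querier walks the token set in its
# own random order, so concurrent queriers start at different "regions"
# instead of all seeing the same first n games.

TOKEN_SET = ["t0", "t1", "t2", "t3", "t4"]  # initially 5 tokens

def assign_token(game: dict) -> dict:
    """Stamp a newly created game with a random token from the fixed set."""
    game["token"] = random.choice(TOKEN_SET)
    return game

def token_query_order() -> list:
    """Each querier visits the tokens in its own shuffled order."""
    return random.sample(TOKEN_SET, k=len(TOKEN_SET))

# In-memory stand-in for the games table:
games = [assign_token({"id": i}) for i in range(100)]

# Retrieve games token by token until we have enough to show (e.g. 25):
visible = []
for tok in token_query_order():
    visible.extend(g for g in games if g["token"] == tok)
    if len(visible) >= 25:
        break
```

Two concurrent queriers would most likely start with different tokens, so they contend over different games.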



e.g. normally the games retrieval logic is something like:


for left-out allocation places from 1 to 3:

   retrieve a batch of games and add them to the result

then, if the result has fewer than 25 games, retrieve games with left-out capacity > 3 and add them too, to generate the games that will be visible in the user's query.
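The retrieval steps above can be sketched like this (fetch_batch is a hypothetical stand-in for the real DB read; the capacity values 1–3 and the 25-game page size come from the description above):

```python
# In-memory stand-in for the games table: 60 games, left_capacity 1..6.
GAMES = [{"id": i, "left_capacity": (i % 6) + 1} for i in range(60)]

def fetch_batch(predicate):
    """Stand-in for one DB read; here it just filters an in-memory list."""
    return [g for g in GAMES if predicate(g)]

def games_for_query(page_size=25):
    result = []
    # for left-out allocation places from 1 to 3: retrieve a batch and add
    for capacity in (1, 2, 3):
        result.extend(fetch_batch(lambda g, c=capacity: g["left_capacity"] == c))
    # if fewer than page_size games so far, also take left-out capacity > 3
    if len(result) < page_size:
        result.extend(fetch_batch(lambda g: g["left_capacity"] > 3))
    return result[:page_size]
```

Note each call to games_for_query costs up to 4 reads (three capacity batches plus the fallback), which is where the per-token read cost discussed below comes from.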



So now I would also add this static token assignment and that algorithm, so that queries do not always return games sequentially — which currently has the side effect that you query, try to register for a game, and can't, because 100 people are doing the same thing at the same time. (I mean hypothetically, if there are 100 people retrieving games and trying to register at the same time.)


So this would be managed dynamically according to current active-game count statistics —

I mean the token set count, e.g.
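A minimal sketch of such dynamic sizing — the thresholds here (200 games per token, minimum of 5 tokens) are purely my assumptions for illustration:

```python
# Hypothetical sizing rule: grow the token set as the active-game count
# grows, so each token's "region" of the table stays a reasonable size.

def token_set_count(active_games: int,
                    games_per_token: int = 200,
                    minimum: int = 5) -> int:
    # ceil division without importing math
    return max(minimum, -(-active_games // games_per_token))
```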


This would surely increase the query count and read volume to the DB quite a lot — in the worst case by a factor of the token set count. But that's currently what I came up with to tackle this issue in this sequential DB design.



Yep, I think I added the initial tokens-based randomization of query results, with 5 tokens initially.

So in the worst case the query actually takes 5 (the token count) * 4 read requests to the DB :S

But normally, when the DB is populated with many games, it would most possibly find a token straight away, and in that case it takes (1 token, +1 if that token turns out empty) * 4 read requests to the DB

to generate the games shown in the join-game view.
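The read amplification above works out as follows (4 reads per token probe: the three capacity-1..3 batches plus the >3 fallback):

```python
# Read-amplification estimate from the figures in the text.
READS_PER_TOKEN = 4   # capacity-1, capacity-2, capacity-3 batches + fallback
TOKEN_COUNT = 5       # initial token set size

worst_case_reads = TOKEN_COUNT * READS_PER_TOKEN  # every token probed
typical_reads = (1 + 1) * READS_PER_TOKEN         # 1 token, +1 retry if empty

print(worst_case_reads, typical_reads)  # → 20 8
```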


Yep, I think this many read requests is not very costly compared to the cost of hosting a fully maintained RDS-style MySQL, versus this horizontally scalable DB version. I would always choose Delta Lake over RDS (assuming this design actually works — I have to test it; I think it will, but let's see).

I mean, the cost of hosting a full-fledged RDS-like DB — whatever the DB technology — is usually much higher than storing the DB in buckets on a distributed bucket file system, where there is no DB process at all; instead a Python web service runs the queries against the DB. Which is the current design.

The only thing to consider is making sure the buckets are on the same network as the HTTP service that adds/updates data in the Delta Lake.


Plus, in these conditions MySQL is usually not the way to go; it requires a distributed SQL or NoSQL DB, and the cost of maintaining such systems is not trivial. Maybe another alternative might have been using Elasticsearch as a DB, but I don't think it provides the transaction isolation that Delta Lake does. There are possible alternatives here and there, like distributed PostgreSQL etc., but again I think Delta Lake is the best fit for this. And I don't care that it does many reads — a read is not an expensive thing compared to the maintenance of a DB process, whatever it is, MySQL or otherwise.

So in the ideal case this design should work. But maybe it won't — let's see: test and decide. I think it will most possibly perform well.

I know it's unusual for one REST query to a DB to multiplex into e.g. 20 DB read queries just to gather some data, but I think it's still definitely less expensive than fully hosting an RDS instance like MySQL etc.


So although this token-based non-linearization mechanism multiplexes read requests, I think it's still the design choice to select. It's unusual, but maintaining a MySQL DB is much harder, so forget that a query might turn into 20 DB queries in the worst case :) That's negligible compared to MySQL maintenance, or Elasticsearch maintenance, etc.


At least this is something, instead of always returning the same first n entries to everyone — if 100 users try to join a game at once, it would be a mess.

I think Elasticsearch possibly also has transaction isolation capabilities like Delta Lake. But again, why maintain a fully present database process all the time, when it could be a REST service with one core that only scales to more cores when load is high — and the DB is instead a file system of bucket files?


Considering all the possible issues that can happen in any database, plus the database maintenance tasks etc., I found this approach less challenging to maintain. With this design I won't have to become an Elasticsearch maintenance-task doer — or MySQL, or whatever tech.

So this design is nice in these aspects, IMHO, although it generates a lot of reads.


Yep, at last the initial DB layer code is finished (well, not actually finished — I need to add UT/IT tests, but before that I'll also finish the HTTP code). So now I'll finish the Python HTTP bindings and the API definition/code mapping to DB requests and such.


I am so happy that with Delta Lake we can now have a DB served from a Python process, without having to maintain any DB process at all.


Because any DB solution is usually very expensive, IMHO (I mean for people who are not startups or companies — hobby-project developers). At least I remember it like that; let me check to confirm.

Hmm, turns out it's not that expensive. But I still find the bucket-based DB the least expensive one. I think the effect is that at scale it becomes many times cheaper: when you scale out RDS with larger instances, the price of the MySQL instance increases a lot, whereas in the Python version adding more buckets costs almost nothing (it's basically free horizontal scaling), and scaling the Delta Lake Python process (for queries etc.) is also not that costly, IMHO.


Hmm, I think I initially figured it's ~30 dollars per month for a single-CPU, 1 GB memory node. I don't know how much nginx itself (the load balancer) costs, but there would be at least 2 such nodes constantly present — the main Node.js node and this Python HTTP node — coming to 60 dollars per month, and then the ~0.1/hour management cost is covered by the 74-dollar free-tier thingy.

So this original non-scaled-out version would cost me something like ~70 dollars per month, yep. And if it scales out, the extra cost would be irrelevant anyway.


I mean, for instance, if such a configuration (1 CPU, 1 GB memory) can host 150 sessions at any single moment,

that means 150 * n gamers for this 2-node, ~70-dollar environment or so,

where n reflects that it won't be occupied by gamers every second. So I think such an environment could host ~450 gamers a month, considering it won't be fully allocated all the time outside peak hours — and peak hours differ per player.


So from each gamer we have 3 products (3 turret types; two of them are 2 dollars and one is 1 dollar).

If all the gamers bought those (probably not all would), let's first calculate the income assuming they all did.


450 * (5 / 2 (taxes)) → ~1125 income. We could then also add game passes: e.g. a 5-dollar pass for a specific game version, and if the same 450 people bought that too, it would be ~2250 dollars income. (Although most likely not all gamers would buy all turrets, and there may be gamers using the free turrets.) Still, each month I would run a game-pass event with 5-dollar tickets to generate income, and would also add new Mars rover / turret types with a more professional design this time — I mean I would get help from a designer team, at least for PBR (realistic textures, like the cool tank textures in World of Tanks; my Mars rover textures are very unrealistic and low quality in comparison). Hmm, I think I would have both free game passes (requiring some accomplishment, e.g. complete game task 1, or reach some score, to be eligible for the battle pass for a game version) and a ticket purchasable in-game, so the pass could also be entered that way. So game passes would have that kind of logic.

The original game would be free, and game passes would also be free to play, but they would require score and level accomplishments — or else buying a ticket. Yep.


So that would be 2250 - 70 => ~2180 remaining income. And if it scales from 450 users to 1000 users, that's ~5000 gross per month; at 10,000 users ~50,000; at 100,000 users ~500,000 (scaling linearly at ~5 dollars net per user).
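The arithmetic above can be checked with a tiny model (all figures are the post's rough assumptions — the product prices, the flat 50% tax/store cut, the ~70-dollar hosting cost — not real data):

```python
# Back-of-the-envelope income model using the post's rough assumptions.
TURRET_REVENUE_PER_USER = 5   # 2 + 2 + 1 dollars of turret products
PASS_REVENUE_PER_USER = 5     # optional 5-dollar game pass
TAX_FACTOR = 0.5              # the "/ 2 (taxes)" cut
HOSTING_COST = 70             # two ~30-dollar nodes plus management overhead

def monthly_net(users):
    gross = users * (TURRET_REVENUE_PER_USER + PASS_REVENUE_PER_USER)
    return gross * TAX_FACTOR - HOSTING_COST

print(monthly_net(450))  # → 2180.0
```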


So the original idea is that it pays its own bills: I might have to cover the initial ~70-dollar monthly cost myself, but if it scales to 450 users it already creates more than enough surplus to pay the monthly cost of the Kubernetes-based design.

Most likely it won't scale to even 1000 users, I think, but one can never know — maybe it gets popular. Let's see.

First, let's finish these tasks and the C++ plugin tasks and finalize the plugin. Then fix the remaining issues in the game's v1 version (there is one issue, plus one issue related to music etc., so I think 2 main issues remain there). Otherwise the v1 version is complete — other than this plugin I'm working on, which makes it playable over the internet instead of requiring LAN.



------------------------------







Yep, there are still lots of tasks, but I think the basic HTTP API methods — create game, query games, and the join-game DB API — are completed.


I didn't expect the array_append method to even work in the deltalake Python library's merge table commands.


So I think there are a lot of tasks left for this DB module. First of all, integration tests / unit tests are missing.

I only tested with curl HTTP requests (I initially tested directly with curl, and the DB API likewise) and have not tested all functions yet, only the main ones (e.g. it was unclear to me whether array_append would even work in the Python library — yep, it works there too).
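For reference, the kind of merge being talked about looks roughly like this with the deltalake (delta-rs) Python library's merge builder. The table layout, column names, and the join-game semantics are my assumptions for illustration; array_append here is a SQL expression evaluated by the library's engine during the merge:

```python
# Hypothetical join-game merge sketch (column names and table layout are
# assumptions, not the real module's schema). The update-expression dict is
# plain data, so it can be built and inspected without a Delta table; the
# merge itself requires the `deltalake` package.

def join_game_updates():
    """SQL-expression updates applied when the game row matches."""
    return {
        "players": "array_append(t.players, s.player_id)",
        "left_capacity": "t.left_capacity - 1",
    }

def join_game(table_uri, source_batch):
    """Run the merge against a real Delta table (requires `deltalake`)."""
    from deltalake import DeltaTable  # imported lazily on purpose
    dt = DeltaTable(table_uri)
    (
        dt.merge(
            source=source_batch,  # e.g. a pyarrow Table with game_id, player_id
            predicate="t.game_id = s.game_id",
            source_alias="s",
            target_alias="t",
        )
        .when_matched_update(updates=join_game_updates())
        .execute()
    )
```

This is a sketch of the shape of the call, not the post's actual code.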


So the remaining things to do for the Python-based game DB API: test more, refactor, and definitely add some unit tests, plus some integration tests.


yep.


So today I'm working on adding some unit tests for the DB functions; then I'll test the HTTP API manually with curl, since I'm being lazy about writing integration tests and HTTP API test cases for now. (I'll manually test the HTTP API, but for the DB I'll write unit tests.) (Normally, when I worked at companies, I also wrote test cases for HTTP APIs, but I won't here, since this HTTP API is about as simple as an HTTP API can be: no paging, no data streaming, no sessions. So no UT/IT tests for it — manual testing with curl instead.)

Hmm, but the DB module surely requires thorough testing with unit tests; I will write unit test cases for that, but not for the HTTP API.



Hmm, for the Node.js server I don't know if I'll write integration tests. They would be challenging to set up, but I might write simple integration tests for the WebRTC data channel against the Node.js server.


Hmm, if I finish the Python module today, I'll then resume the C++ module writing tasks, which are also a challenge. Yep, let's check that later today, after this initial HTTP module is finished.

Hmm, I think I'd be able to finish the initial version of the plugin by some day next week. But today I'll resume the C++ and Node.js server tasks.

I think the most challenging part of the C++ module is not the WebRTC setup (that's the simplest part of the plugin work) but the delegate server/client logic to be added there, yep. Still, I think these tasks could be completed by this weekend, so that I can resume the game's remaining issue tasks,


to complete the v1 version and then upload it to the game platform web page.
























