yayyy studying starts now again.

hmm. 


yesterday i had no energy left to start studying the dependency graph creation part, since most of my time went to figuring out the computational efficiency problems of building the initial ttl from wikidata into a table, and later into neo4j format.

so today i feel it's time to start checking the neural network that enables dependency graph creation.


hmm i also wish to add some topology studying today. hmm, yesterday was like being lost in the original raw table of unprocessed ttl data, testing various partition schemes. i had partitioned it into partitions of at most 20K rows each, and it seems apache jena has problems parsing that many lines, i mean it parses, but slowly. i might instead initialize jena with something like 1000 lines at a time, combined with spark-style windowing (tumbling windows) over every 20,000 records, or something like that. i would also check how much cpu time it takes to process 20K lines of data on my local pc, it just takes too much time even on remote spark somehow. i investigated whether i should use aws lambda, but there is too much data for that too. hmm, i will figure out a method. actually, is it even mandatory to store the db like this? there are libraries which can query the data directly, but i think i need to store this ttl data since i will later import it into neo4j via rdf (not as one neo4j db but many).
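the batching idea above could be sketched roughly like this. a minimal pure-Python sketch (the function name and batch sizes are made up for illustration; in practice apache jena or spark would do the real parsing and windowing) that splits turtle lines into batches of at most N complete statements, cutting only on lines that end a statement with ".":

```python
# Minimal sketch: split Turtle text into batches of at most
# `batch_size` complete statements, so each batch can be handed
# to a parser (e.g. Apache Jena) or a Spark micro-batch on its
# own. Assumes simple Turtle where a statement is finished by a
# line whose stripped form ends in "."; multi-line statements
# using ";" / "," stay together until that closing ".".

def batch_turtle_lines(lines, batch_size=1000):
    batches, current, statements = [], [], 0
    for line in lines:
        current.append(line)
        if line.strip().endswith("."):
            statements += 1
            if statements >= batch_size:
                batches.append(current)
                current, statements = [], 0
    if current:
        batches.append(current)  # trailing partial batch
    return batches
```

each batch could then be parsed or shipped independently, which is the point of keeping jena's input small per call.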


so it took a lot of time to figure out a method to handle this conversion of ttl to a subject-predicate-object table, since there is simply too much data. it could be solved with a lot of task nodes, but that would be a lot more expensive, so i am trying to figure out a minimal computation budget for such a huge data processing task. this initial task is one of the biggest data processing tasks of the whole project.
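just to make the target concrete, a naive sketch of the table shape this conversion aims for, turning single-line turtle triples into (subject, predicate, object) rows. real turtle (prefixes, ";" and "," continuations, literals containing spaces) needs a proper parser such as apache jena or rdflib; this only illustrates the output format:

```python
# Naive sketch of the target table: one (s, p, o) row per
# single-line Turtle triple "subject predicate object .".
# Blank lines and @prefix directives are skipped. Not a real
# Turtle parser -- use Jena or rdflib for actual data.

def triples_to_rows(ttl_lines):
    rows = []
    for line in ttl_lines:
        line = line.strip()
        if not line or line.startswith("@"):  # skip blanks / @prefix
            continue
        parts = line.rstrip(" .").split(None, 2)
        if len(parts) == 3:
            rows.append(tuple(parts))  # (subject, predicate, object)
    return rows
```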


yayyy, every day it's so fun to open the pc and start this ai project at these hours :)


aha, on monday it's a holiday, yayy. so i have 1 more day for this project after the weekend :)


------------------------------------------------------------------


hmm so today would be my first machine learning environment setup on aws. i have seen some pytorch library i would try to use to create dependency graph information.
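before any neural network enters the picture, the "dependency graph" such a model would target could be represented very simply from the triple rows. a hypothetical sketch (the names and edges are invented for illustration; the real input would be the wikidata-derived table):

```python
# Hypothetical sketch: a dependency graph built from
# (subject, predicate, object) rows, stored as an adjacency
# mapping subject -> [(predicate, object), ...].

from collections import defaultdict

def build_dependency_graph(rows):
    graph = defaultdict(list)
    for s, p, o in rows:
        graph[s].append((p, o))  # edge s --p--> o
    return dict(graph)
```

a learned model could then predict missing edges in such a structure, but that is the part still to be designed.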

and i might need to integrate some lucene wordnet library to also cluster synonymous literals, to start defining them topologically as much as possible. their similarity would then also be present in word embeddings produced by word embedding solutions, alongside such topological definitions.
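the clustering step could be sketched with a union-find over synonym pairs. the pairs below are hard-coded stand-ins; in practice they would come from WordNet (e.g. via Lucene's wordnet module or NLTK), and the resulting clusters would seed the topological definitions:

```python
# Sketch: group transitively synonymous literals into clusters
# with union-find (path halving). Synonym pairs are assumed to
# come from WordNet in the real pipeline; here they are inputs.

def cluster_synonyms(pairs):
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in pairs:
        parent[find(a)] = find(b)  # union the two clusters

    clusters = {}
    for word in parent:
        clusters.setdefault(find(word), set()).add(word)
    return list(clusters.values())
```

the nice property is that synonymy chains ("big"~"large", "large"~"huge") collapse into one cluster even when no direct pair links the extremes.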

then, with both of those pieces of information, i could retrain such neural network types to find functional, category-theory-wise relations. but i have not studied that field yet, so i would need to study it first. hmm, i would also need to add lots of reasoning modules.



