yayy, now I would run a pretrained neural network to do both constituency and dependency parsing, from an NN that uses a pretrained encoder/decoder network.
but I also need to revise other machine learning methods, e.g. visualization libraries. and I also forgot the math of logistic regression / logit classification, hmm.
hmm, there are lots of neural network topics I have forgotten, since I had not been doing machine-learning-engineering type tasks for a long time.
I also need to re-study the transformer's internal query/key/value based attention mechanism, but not now. initially I would just check the dependency graph parsing results.
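As a memory aid for that later revision, here is a minimal numpy sketch of the query/key/value scaled dot-product attention step (the core of a transformer layer); the toy shapes and random inputs are illustrative only:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V for one attention head."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)               # (n_q, n_k) similarity scores
    scores -= scores.max(axis=-1, keepdims=True)  # shift for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # rows sum to 1
    return weights @ V, weights

# toy example: 3 tokens, head dimension 4
rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))
K = rng.normal(size=(3, 4))
V = rng.normal(size=(3, 4))
out, w = scaled_dot_product_attention(Q, K, V)
```

Each output row is a weighted mixture of the value vectors, with weights given by query/key similarity.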
but this language-modeling-integrated layer model could also be used for ontological tasks, hmm. nice. I mean, not just this specific task of dependency graph prediction; ontological predictions could maybe also be accomplished.
e.g. take a verb, and take an ontological signature. if we encoded the ontological signature hierarchically into the embedding definition of the utterance's word embedding, as with the decoded part, then from the universe of language we could learn the ontological associations between verbs and ontology classes, as a predictor. hmm, I mean, besides a to-be-defined topological representation of verbs, we might extend such info with semantic ontology associations via such an unsupervised approach, if there were an unambiguous ontology hierarchy for each utterance. unfortunately that's not the case, so associating ontology class associations with verbs might not be an easy task. but wait a minute, maybe PropBank plus WordNet might help with the ambiguity resolution of utterances: consider the word embedding space of ontological hierarchy definitions, treat each ontology level's divergences as a vector space, create orthogonal subspaces for completely unrelated ontology hierarchies, and build a Bayesian mixture model over the ontology structure definitions to estimate the probability of each ontology hierarchy vector. thereby it might be possible to train, but that would need a mixture model where the probabilities of the different ontology hierarchies are mixed in a way that is revertible.
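The Bayesian mixture idea above could be sketched roughly like this; the word senses, hierarchy vectors, and the Gaussian-like likelihood are all illustrative assumptions, not taken from any real ontology:

```python
import numpy as np

# Toy ontology hierarchy vectors: each sense of an ambiguous word maps to a
# hierarchy embedding; unrelated hierarchies sit in near-orthogonal subspaces.
# The sense names and vectors are hypothetical examples.
hierarchies = {
    "bank/finance": np.array([1.0, 0.5, 0.0, 0.0]),  # e.g. institution > organization
    "bank/river":   np.array([0.0, 0.0, 1.0, 0.4]),  # e.g. landform > place
}

def hierarchy_posterior(context_vec, hierarchies, prior=None):
    """P(hierarchy | context) via Bayes: likelihood from squared distance
    to each hierarchy vector, times a (uniform by default) prior."""
    names = list(hierarchies)
    if prior is None:
        prior = {n: 1.0 / len(names) for n in names}
    scores = np.array([
        np.exp(-np.sum((context_vec - hierarchies[n]) ** 2)) * prior[n]
        for n in names
    ])
    post = scores / scores.sum()   # normalize to a probability distribution
    return dict(zip(names, post))

# a context vector close to the "river" subspace
post = hierarchy_posterior(np.array([0.1, 0.0, 0.9, 0.5]), hierarchies)
```

The posterior then gives the per-hierarchy mixture weights that the text proposes to estimate per utterance.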
so maybe it's better to have a longer word embedding and store all such ambiguous ontology hierarchy signatures separately in that case, and if there is no ambiguity and only one signature is present, have some method to zero out the second, third, fourth ontology signature parts. and the ontologies that are present could be dampened or reverse-dampened along with their Bayesian probability from the initial ambiguity predictions. but that might not be too much.
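That longer, slot-based embedding could look like the following sketch, where the slot sizes are arbitrary choices and "dampening" is implemented simply as scaling each present signature by its estimated probability:

```python
import numpy as np

SLOT_DIM, N_SLOTS = 4, 3  # illustrative sizes, not a real design decision

def multi_sense_embedding(signatures, probs):
    """Concatenate up to N_SLOTS ontology-signature vectors into one long
    embedding. Unused slots stay zeroed; each present signature is dampened
    (scaled) by its ambiguity probability."""
    emb = np.zeros(SLOT_DIM * N_SLOTS)
    for i, (sig, p) in enumerate(zip(signatures, probs)):
        if i >= N_SLOTS:
            break
        emb[i * SLOT_DIM:(i + 1) * SLOT_DIM] = p * np.asarray(sig, dtype=float)
    return emb

# unambiguous word: one signature with probability 1, remaining slots zero
emb = multi_sense_embedding([[1, 0, 0, 1]], [1.0])
```

Because the scaling is linear and the slot layout is fixed, the per-hierarchy parts remain recoverable (revertible) by slicing and dividing out the probability.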
hmm, via such a method of word embeddings of ontological structures, we could then find the ontological classes a verb has in context. so, besides a mathematical topological description of a verb, the ontological spaces of the verb's subject and object parts in the predicate would be available from such an NN.
hmm, I liked this idea: having both a mathematically constructed topological definition of verbs and the ontology class sets of its subject and predicate. the latter would come from such an NN, as described, built on a pretrained BERT model.
but this is still not exactly the interpretation precision we wish to achieve. we wish the predicate context to be very neatly defined, both semantically and mathematically/topologically. it would be manual work to define these in such a topological ontology.
wow, there is too much work to do :) but it's very fun to study for this project every day :)
hmm, so the RDF-wise topological definitions part of the resultant interpretation would be the last and most important task. I mean the manual task of entering topological definitions of verbs and nouns, for a vocabulary set. this would be a lot of effort, but:
with this last part, it would be like a project of trying to tie up a thousand integrated parts, if we consider the topological RDF definitions as the integrated parts. it would be like defining such an initial set, and then the AI would discover other such ontological definitions itself, or ask if it can't find/compute them itself.
it's like trying to build a thing of a thousand integrated parts. I think this topological RDF part of the project would be one of the hardest parts, or at least effort-wise it would really be a lot of effort. but after that, it would have knowledge-graph-wise represented, readable interpretations: a Cartesian/dualist-wise defined universe definition/interpretation.
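One such hand-entered topological RDF definition could be sketched as plain (subject, predicate, object) triples; the namespace, property names, and the verb entry below are all hypothetical placeholders, not an existing vocabulary:

```python
# A hypothetical "topological RDF" vocabulary entry for one verb, written as
# bare triples so the sketch stays library-free. All names are illustrative.
EX = "http://example.org/topo#"

def triple(s, p, o):
    """Build one (subject, predicate, object) triple in the EX namespace."""
    return (EX + s, EX + p, o)

verb_defs = [
    triple("pour", "hasOntologyClass", EX + "CausedMotion"),
    triple("pour", "subjectClass", EX + "Agent"),
    triple("pour", "objectClass", EX + "Liquid"),
    triple("pour", "topologicalSignature", "source-path-goal"),
]
```

An initial seed set in this shape is the kind of manual work the text describes; the NN-predicted ontology classes would then extend it.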
so let's start running the example neural network project now and check how it performs on the dependency graph generation task.
--------------------------------------------------------------------
yayy, I might revise some ML topics later, but first I'll check the pretrained BERT network's accuracy/recall/precision on this dependency graph prediction task.
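Worth noting that dependency parsers are conventionally scored with attachment scores rather than plain accuracy/precision/recall: UAS (correct head) and LAS (correct head and relation label). A minimal sketch, with a made-up three-token example:

```python
def uas_las(gold, pred):
    """Unlabeled/labeled attachment scores for one sentence.
    gold/pred: per-token lists of (head_index, relation_label)."""
    assert len(gold) == len(pred)
    n = len(gold)
    uas = sum(g[0] == p[0] for g, p in zip(gold, pred)) / n  # head correct
    las = sum(g == p for g, p in zip(gold, pred)) / n        # head + label correct
    return uas, las

# toy 3-token sentence; the parser got every head right but one label wrong
gold = [(2, "nsubj"), (0, "root"), (2, "obj")]
pred = [(2, "nsubj"), (0, "root"), (2, "iobj")]
u, l = uas_las(gold, pred)  # u = 1.0, l = 2/3
```

Averaging these over a treebank's sentences (or over all tokens) gives the standard figures reported for dependency parsing models.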