yayy, more than half of the lectures are done, so I'll give myself some rest time. 12 lectures remaining. But this was an introduction to computational linguistics, to revise basic algorithms like Naive Bayes, SVM, etc.
More important is fully learning the PTB (Penn Treebank) standards, and the algorithms around parse trees/constituency information.
Then I need to revise topology ASAP to craft very initial, very basic topological RDF representations to translate these into; then start defining various POS topology representations, and then move on to studying verb topology representations. Initially these will be very basic versions: not very well-studied RDFs at first, but a baseline to build more refined versions on afterwards.
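A first cut at such POS-level triples could look like the sketch below, with plain Python tuples standing in for real RDF. The `ex:` namespace prefix and the predicate names (`surfaceForm`, `posTag`, `precedes`) are invented placeholders, not an established vocabulary:

```python
# Minimal sketch: encode POS-level information as subject-predicate-object
# triples. The "ex:" prefix and all predicate names are made-up placeholders.

def pos_triples(tokens):
    """Turn (word, POS-tag) pairs into simple RDF-style triples."""
    triples = []
    for i, (word, tag) in enumerate(tokens):
        node = f"ex:token_{i}"
        triples.append((node, "ex:surfaceForm", word))
        triples.append((node, "ex:posTag", tag))
        if i > 0:
            # keep linear word order as an explicit relation
            triples.append((f"ex:token_{i-1}", "ex:precedes", node))
    return triples

sentence = [("the", "DT"), ("dog", "NN"), ("barks", "VBZ")]
for t in pos_triples(sentence):
    print(t)
```

A real version would use an RDF library and proper URIs, but even this flat form is queryable and gives the baseline to iterate on.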
Then I would need to craft constituency relations between such RDFs of paragraphs, and the linkages between them.
That could also be accomplished with pretrained models plus some unsupervised modeling/metrics, even if not very accurate.
I think the challenge in this task is mostly finding linkages between the different paragraphs an AI reads.
I mean, a transformer model would read at most a single paragraph at once, but the relevant context might sit two paragraphs back. So this is not something to be handled by the usual attention mechanisms of transformers, IMHO, unless the model's parameter size is increased a lot.
So this part of the algorithm design will be interesting. Of course they won't be the best algorithms initially; I would devise some experimental algorithms for those linkages/constituency relations. They might not be the very best, since I am not an AI/ML expert, but they would still function.
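One such experimental linkage metric could be as simple as cosine similarity over bag-of-words counts, linking any pair of paragraphs above a threshold. This is only a sketch of the unsupervised idea; a real version would swap in pretrained embeddings, and the threshold value here is arbitrary:

```python
# Sketch of an unsupervised paragraph-linkage metric: cosine similarity
# over bag-of-words counts. The 0.2 threshold is an arbitrary placeholder.
import math
from collections import Counter

def bow(text):
    """Bag-of-words count vector for a paragraph."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def link_paragraphs(paragraphs, threshold=0.2):
    """Return index pairs of paragraphs whose similarity exceeds threshold."""
    vecs = [bow(p) for p in paragraphs]
    links = []
    for i in range(len(vecs)):
        for j in range(i + 1, len(vecs)):
            if cosine(vecs[i], vecs[j]) >= threshold:
                links.append((i, j))
    return links

paras = [
    "the parser builds a tree for the sentence",
    "weather was nice today",
    "the tree from the parser feeds the knowledge graph",
]
print(link_paragraphs(paras))  # links paragraph 0 to paragraph 2
```

Because the metric runs over the whole document at once, it sidesteps the fixed context window of a single transformer forward pass.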
--------------------
Hmm, I need to spare an hour or more to finish studying this course. Then I will continue learning about parse trees/treebanks, and focus on resolving the ambiguity among the derived parse-tree versions using not just the frequentist interpretation from a neural network but also ontological knowledge coming from Wikidata. So I wish for the neural network to provide a multitude of parse trees, to be decided among by ontological data from Wikidata. I guess I need to study this parse-tree derivation topic more, because it is the crucial point in devising the knowledge graph (representations/knowledge graphs that the AI would be able to easily run inference queries against).
So it would mix the Hebbian world with the categorical, ontological world.
I mean, when we listen to a sentence, we derive the correct parse tree via our Hebbian neural networks, internal networks built from the frequentist probabilities of sentence structure, plus other common ontological knowledge about the sentence parts.
My AI would do the same: it would leave the task of interpretation/ambiguity analysis/tree derivation neither to frequentist neural networks alone nor to ontological representations alone, but would mix those two worlds to create its representation, a knowledge graph of its interpretation of the read sentence.
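That mixing step could start as a simple reranker: combine each candidate tree's neural-network probability with an ontological compatibility score. Everything below is invented for illustration (the candidate scores, the weight `alpha`, and the example attachment labels); it only shows the shape of the combination:

```python
# Toy sketch: rerank candidate parse trees by mixing a frequentist
# (neural-network) probability with an ontological compatibility score.
# All scores and the weight alpha are invented for illustration.

def rerank(candidates, alpha=0.5):
    """candidates: list of (tree_label, nn_prob, onto_score) tuples.
    Returns them sorted by the mixed score, best first."""
    def mixed(c):
        _, nn_prob, onto_score = c
        return alpha * nn_prob + (1 - alpha) * onto_score
    return sorted(candidates, key=mixed, reverse=True)

# Two readings of "I saw the man with the telescope": the NN slightly
# prefers attaching the PP to "man", but an ontology (telescopes are
# instruments for seeing) favors attaching it to "saw".
candidates = [
    ("PP attaches to 'man'", 0.55, 0.20),
    ("PP attaches to 'saw'", 0.45, 0.90),
]
print(rerank(candidates)[0][0])  # -> "PP attaches to 'saw'"
```

In a fuller version, `onto_score` would come from Wikidata lookups over the entities in each tree rather than from hand-set numbers.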
--------------------