Hmm, in case the private IP allocation is not compatible with the Lua-based proxy_pass in some specific cluster installation type (dunno if such a thing even exists), I decided to use label-selector-based routing instead of depending on the IP-based Lua proxy_pass.


So I would install Kyverno to try out this policy, which an AI suggested:

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-pod-ip-label
spec:
  rules:
  - name: add-ip-label
    match:
      any:
      - resources:
          kinds:
          - Pod
          # status.podIP is only populated after scheduling, so the
          # policy has to fire on updates too, not just creation
          operations:
          - CREATE
          - UPDATE
    preconditions:
      all:
      # skip objects that do not have a pod IP yet
      - key: "{{ request.object.status.podIP || '' }}"
        operator: NotEquals
        value: ""
    mutate:
      patchStrategicMerge:
        metadata:
          labels:
            # The '+' ensures it only adds if missing
            +(pod-ip): "{{request.object.status.podIP}}"


so that routing won't be based on raw IPs but rather on the default mechanism of label-selector/Service logic behind the NGINX ingress.
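For illustration, a per-pod Service keyed on that pod-ip label might look like the sketch below (the name, app label, ports, and the concrete IP are all my assumptions; the IP part would have to be filled in dynamically by whatever generates these manifests):

apiVersion: v1
kind: Service
metadata:
  name: nodejs-pod-10-12-0-7     # hypothetical name derived from the pod IP
spec:
  selector:
    app: nodejs-app              # assumed app label on the Node.js pods
    pod-ip: 10.12.0.7            # the label the Kyverno policy stamps on
  ports:
  - port: 80
    targetPort: 3000             # assumed Node.js container port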

Yep.


But wait a second, this is not as trivial to develop, since it would require the Service's label selector to also be dynamically configured, which is not trivial to do.

In the end one might need to write one's own mutating webhook or similar mechanism in Golang to implement such a thing, or write some additional scripts run as a Kubernetes cron task that update/recreate the Service selectors and the ingress deployment, e.g. like the sketch below.
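A rough sketch of such a cron task, assuming a svc-syncer service account with RBAC to list pods and manage Services, the bitnami/kubectl image, and the label/port conventions from the Service example above:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: sync-per-pod-services
spec:
  schedule: "*/5 * * * *"              # every 5 minutes
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: svc-syncer   # assumed SA with pod/service RBAC
          restartPolicy: OnFailure
          containers:
          - name: sync
            image: bitnami/kubectl:latest
            command: ["/bin/sh", "-c"]
            args:
            - |
              # create a Service per Node.js pod; the Kyverno pod-ip label
              # makes each pod's label set unique, so `kubectl expose pod`
              # yields a selector matching exactly one pod
              for pod in $(kubectl get pods -l app=nodejs-app \
                  -o jsonpath='{.items[*].metadata.name}'); do
                kubectl get svc "svc-$pod" >/dev/null 2>&1 || \
                  kubectl expose pod "$pod" --name "svc-$pod" \
                    --port 80 --target-port 3000
              done
              # pruning Services of pods that no longer exist is omitted here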

Yep, I think the Lua method seems more logical if the IP issue does not happen, i.e. if the internal IPs of pods on worker nodes are reachable from the Kubernetes control nodes (with the network port configs done).
I mean, maybe there is some IP translation somewhere, in which case a raw IP address (instead of a Service config, i.e. a Service-based DNS address) won't be applicable. Dunno; I haven't implemented a VPC or such stuff, so I don't know whether the worker nodes' IP addresses would be reachable from the control-node side in Kubernetes.
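For reference, the kind of Lua routing I have in mind, as a minimal OpenResty-style sketch (the X-Pod-IP header name and port 3000 are my assumptions):

upstream target_pod {
    server 0.0.0.1;   # placeholder; the real peer is picked in Lua below
    balancer_by_lua_block {
        local balancer = require "ngx.balancer"
        -- assumption: the client sends the target pod IP in X-Pod-IP
        local ip = ngx.var.http_x_pod_ip
        if not ip or ip == "" then
            ngx.log(ngx.ERR, "missing X-Pod-IP header")
            return
        end
        local ok, err = balancer.set_current_peer(ip, 3000)
        if not ok then
            ngx.log(ngx.ERR, "failed to set peer: ", err)
        end
    }
}

server {
    listen 80;
    location /socket.io/ {
        # websocket upgrade headers needed by Socket.IO
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_pass http://target_pod;
    }
}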



Yep, the other thing was how to add a header to Socket.IO (sio) for the Lua approach -> yep, the search-engine AI quickly suggested that as well; I had previously searched for this but forgot to note it down somewhere.
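Roughly it boils down to socket.io-client's extraHeaders option; a minimal sketch (the URL, header name, and IP are assumptions; note that in browsers custom headers only ride on the polling transport, so the websocket upgrade would need a cookie or query param instead):

import { io } from "socket.io-client";

// minimal sketch; the URL, header name, and target IP are assumptions
const targetPodIp = "10.12.0.7";

const socket = io("https://example.com", {
  transportOptions: {
    polling: {
      // attached to the HTTP polling requests that the Lua balancer inspects
      extraHeaders: { "X-Pod-IP": targetPodIp },
    },
  },
  // in Node.js a top-level extraHeaders option also covers the websocket transport
});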


So hopefully there is no IP translation :) (-> so that the Lua approach is applicable).
Otherwise I would have to implement something like a Kubernetes cron task that routinely redeploys the NGINX ingress and the Service manifests: the Kyverno-based mutator would keep the dynamic pod-ip label metadata up to date via the merge approach, and the cron task would then reconfigure NGINX and the Services based on the current deployments and pod IPs.
 

So the issue is that the Node.js part of the system can scale horizontally, but one of the API calls (join session) requires per-pod access through the NGINX ingress routing: the joinee would need to connect to the Node.js instance of one specific pod.

So how do you do that with the NGINX ingress of Kubernetes?

That is the issue being solved here.

So I think I would first check out the Lua approach, since it requires neither Kubernetes cron redeployments nor the Kyverno-based label metadata setting (i.e. putting dynamic labels on pods, e.g. a label holding the pod IP, so that one Service could route specifically to that one pod).

The latter is harder to maintain since it requires a Kubernetes cron task doing constant redeployments of Services, trying to keep them in sync with the current horizontal scale and pod IP situation of the Node.js app.


So during join session, the plugin/library would disconnect from the Node.js server it is currently connected to, reconnect to the specific one, and then call the join session API.
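In plugin/library terms, the flow could look roughly like this (same assumed header and URL as above; the helper and the joinSession event name are hypothetical):

import { io, Socket } from "socket.io-client";

// hypothetical helper: switch to the pod hosting the session, then join it
function reconnectAndJoin(current: Socket, podIp: string, sessionId: string): Socket {
  current.disconnect();  // drop the connection picked by default load balancing

  const socket = io("https://example.com", {
    transportOptions: {
      polling: { extraHeaders: { "X-Pod-IP": podIp } },  // steers the Lua balancer
    },
  });

  socket.on("connect", () => {
    socket.emit("joinSession", { sessionId });  // hypothetical event name
  });

  return socket;
}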

Yep, since we have a state-machine (stt machine) design with an async task manager there, that's not an issue.

So today's challenge is to deploy these. I mean, first create Docker images of the initial versions of the Node.js and Python apps, with their certificates also being added to them as volume-mounted ConfigMaps, if possible via the key/Secret mechanism of GKE.

So the first task is to prepare the Docker images correctly.
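For the Node.js image, something like this minimal sketch (base image, port, and entrypoint are assumptions to adjust):

# minimal sketch; adjust base image, port, and entrypoint to the actual app
FROM node:20-slim
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]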
Then the second task is to deploy the certificates (the TLS private key) and, hmm, the key used in the Python server, i.e. the JSON file used to access the GCP bucket store, as ConfigMap/Secret files attached to the pods themselves.
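For key material, Secrets are probably a better fit than ConfigMaps; a sketch of mounting the TLS key and the GCP service-account JSON into a pod (all names, paths, and the image tag are my assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: py-app
spec:
  containers:
  - name: py-app
    image: py-app:0.1               # hypothetical image tag
    env:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /etc/gcp/sa-key.json   # standard env var read by Google client libraries
    volumeMounts:
    - name: tls-key
      mountPath: /etc/tls
      readOnly: true
    - name: gcp-key
      mountPath: /etc/gcp
      readOnly: true
  volumes:
  - name: tls-key
    secret:
      secretName: app-tls           # assumed Secret holding the TLS private key
  - name: gcp-key
    secret:
      secretName: gcp-sa-key        # assumed Secret holding the service-account JSON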

And then this proxy_pass task. And then modifying the plugin and library code to add this disconnect/reconnect mechanism during the join session API call. That's not hard to implement, since I think that code is coded neatly/designed nicely (I mean, since there is the state machine, it won't be challenging to add such code).



Yep, hopefully I finish all these tasks today and start testing from Vagon VMs on the following day, yep. (For that I might need to edit the plugin code to fix some client code there as well, but not much coding effort, maybe a half-hour or one-hour code task.) But afterwards one can start testing the basic API functions from a Vagon VM.

There is still unfinished code on the Python, Node.js, and plugin side, but it's not related to the scope of the initial Vagon VM tests. (I think the Python, Node.js, and plugin/library code is about 90% written. There is still some code to add, e.g. when a session ends or becomes obsolete, marking the session as completed in the DB is not coded yet; that's why only ~90% of the coding is finished by now. But tests and fixes of the data-transfer code could be run from Vagon VMs before that, I think.)










  
