```bash
# Best to search and confirm the latest version yourself. The trimmed output
# shows two charts; following the official docs, use the first one (we'll
# look at the second one later).
helm search repo graphscope
NAME                          CHART VERSION   DESCRIPTION
graphscope/graphscope         0.9.0           A One-Stop Large-Scale Graph Computing System ...
graphscope/graphscope-store   0.9.0           Chart to create a GraphScope Store cluster

# A quick look at the chart's directory layout (`helm pull` downloads it directly)
graphscope
├── Chart.yaml
├── README.md
├── templates
│   ├── coordinator.yaml
│   ├── _helpers.tpl
│   ├── NOTES.txt
│   ├── role_and_binding.yaml
│   ├── service.yaml
│   └── test
│       └── test-rpc.yaml
└── values.yaml
```
```bash
# Point kubectl/helm at the cluster and make sure its status is healthy
# (simplest: run on the same machine as the k8s master; omitted here)
```
```bash
# Install the chart. ⬇️ "gs" is just a release name; you could also use a
# version number or anything else.
helm install gs graphscope/graphscope --version 0.9.0

# Returned output
NAME: gs
LAST DEPLOYED: Mon Dec 6 10:31:20 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
The GraphScope has been deployed.
(output trimmed)
```
```bash
# test/list/show
helm test gs
# This appears to run the chart's templates/test/test-rpc.yaml, probing the
# RPC port to check liveness. Output:
TEST SUITE:     gs-graphscope-test-rpc-service
Last Started:   Mon Dec 6 10:41:11 2021
Last Completed: Mon Dec 6 10:41:22 2021
Phase:          Succeeded
```
```bash
# From the chart NOTES: "A jupyter-lab is shipped with GraphScope, get the
# jupyter URL by executing":
kubectl --namespace default logs gs-graphscope-coordinator -c jupyter
# Note: the pod name should actually be gs-graphscope-coordinator-xxxUUID,
# so this hint in the NOTES ought to be fixed upstream (TODO).
```
```bash
# By default the service is already exposed externally, so you can try
# accessing it directly. From the NOTES:
# "Then replace the '127.0.0.1:8888' to '${NODE_IP}:30080'."
```
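The URL rewrite described above is mechanical; here is a tiny Python sketch of it (the helper name and sample token are made up, while the 30080 NodePort comes from the chart NOTES quoted above):

```python
# Rewrite the jupyter URL printed in the coordinator logs to the
# externally reachable NodePort address.
def external_url(log_url: str, node_ip: str, node_port: int = 30080) -> str:
    return log_url.replace("127.0.0.1:8888", f"{node_ip}:{node_port}")

print(external_url("http://127.0.0.1:8888/?token=abc123", "192.168.1.10"))
# -> http://192.168.1.10:30080/?token=abc123
```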
```yaml
# The most critical piece: the coordinator. It is a Deployment (long-running,
# master-like).
coordinator:
  service:
    type: NodePort
  image:
    name: registry.cn-hongkong.aliyuncs.com/graphscope/graphscope
    # Overrides the image tag whose default is the chart appVersion.
    tag: ""
  resources:
    requests:
      cpu: 1.0
      memory: 4Gi
    limits:
      cpu: 1.0
      memory: 4Gi
  extraEnv: {}
  readinessProbe:
    enabled: true
    initialDelaySeconds: 10
    periodSeconds: 10
    timeoutSeconds: 15
    failureThreshold: 8
    successThreshold: 1
  # Wait for the GraphScope instance to become ready, up to this timeout.
  timeout_seconds: 1200
```
```yaml
# The actual worker nodes, with default CPU and memory limits (adjust as
# needed). I changed mine to 3 workers with a 10-core CPU limit.
# (ReplicaSet type)
engines:
  num_workers: 2
  # Available options: INFO, DEBUG
  log_level: INFO
  image:
    name: registry.cn-hongkong.aliyuncs.com/graphscope/graphscope
    # Overrides the image tag whose default is the chart appVersion.
    tag: ""
  resources:
    requests:
      cpu: 0.5
      memory: 4Gi
    limits:
      cpu: 0.5
      memory: 4Gi
```
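The adjustment mentioned above (3 workers, 10-core limit) can be expressed as a Helm values override; the file name `engines-values.yaml` is just an assumption, and the keys mirror the chart's `values.yaml` shown here:

```yaml
# engines-values.yaml -- hypothetical override file
engines:
  num_workers: 3
  resources:
    limits:
      cpu: 10.0
      memory: 4Gi
```

Apply it at install time with `helm install gs graphscope/graphscope --version 0.9.0 -f engines-values.yaml` (or later via `helm upgrade`).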
```yaml
# The storage node, v6d (vineyard), configured separately; it is also part of
# the ReplicaSet. Not clear how its memory is used -- is the limit too small?
vineyard:
  # When `vineyard.daemonset` is set to the Helm release name, the coordinator
  # will try to discover the vineyard DaemonSet in the current namespace, use
  # it if found, and fall back to the bundled vineyard container otherwise.
  #
  # The vineyard IPC socket is placed on the host at
  # /var/run/vineyard-{namespace}-{release}.
  daemonset: ""
  image:
    name: registry.cn-hongkong.aliyuncs.com/graphscope/graphscope
    # Overrides the image tag whose default is the chart appVersion.
    tag: ""
  resources:
    requests:
      cpu: 0.5
      memory: 512Mi
    limits:
      cpu: 0.5
      memory: 512Mi
  ## Init size of vineyard shared memory
  ## (how does this relate to the node limit above?)
  shared_mem: 4Gi
```
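To reason about the question in the comment, it helps to put `shared_mem` and the container limit in the same units. A minimal sketch (helper names are ours, and it only covers the binary suffixes used above):

```python
# Convert the Kubernetes binary-suffix quantities used above into bytes,
# so vineyard's shared_mem can be compared with its container memory limit.
_UNITS = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3}

def to_bytes(quantity: str) -> int:
    """Parse a quantity like '512Mi' or '4Gi' (binary suffixes only)."""
    for suffix, factor in _UNITS.items():
        if quantity.endswith(suffix):
            return int(quantity[: -len(suffix)]) * factor
    return int(quantity)  # a plain integer is already bytes

ratio = to_bytes("4Gi") / to_bytes("512Mi")
print(ratio)  # -> 8.0: shared_mem is 8x the container's memory limit
```

Whether that 8x gap is a problem depends on how vineyard's shared memory is accounted against the container's cgroup limit, which is exactly the open question in the comment above.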
```yaml
# A dedicated etcd. Its version doesn't seem to match my standalone 3.15,
# and it seems to start 3 pods by default -- not sure whether the count is
# configurable.
etcd:
  image:
    name: quay.io/coreos/etcd
    # Overrides the image tag whose default is the chart appVersion.
    tag: v3.4.13
  resources:
    requests:
      cpu: 0.5
      memory: 128Mi
    limits:
      cpu: 0.5
      memory: 128Mi
```
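Summing the default requests quoted above gives a rough scheduling footprint. This sketch assumes vineyard runs as a sidecar container in each of the 2 engine pods and counts a single etcd replica; both layout assumptions could differ from the chart's actual behavior, and the etcd replica count is the open question noted above:

```python
# Rough total of the CPU requests from the default values.yaml excerpts
# above. Per-component numbers are copied verbatim; the pod layout
# (vineyard sidecar per engine pod, one etcd replica) is an assumption.
NUM_WORKERS = 2

requests_cpu = {
    "coordinator": 1.0,
    "engine": 0.5 * NUM_WORKERS,
    "vineyard": 0.5 * NUM_WORKERS,  # one sidecar per engine pod (assumed)
    "etcd": 0.5,                    # per replica; actual count may be 3
}

total_cpu = sum(requests_cpu.values())
print(total_cpu)  # -> 3.5 cores requested before any graph is loaded
```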
Coordinator log excerpts from creating and tearing down the GIE (Gremlin) instance:

```
[cluster:377]: Create GIE instance with command: /home/graphscope/.local/lib/python3.8/site-packages/graphscope.runtime/bin/giectl create_gremlin_instance_on_k8s /tmp/gs/gs/session_izynzsgw 4422182077725296 /tmp/graph_w1UFK8u2.json gs-engine-gs-d4wqq,gs-engine-gs-ks94z engine gs-graphscope-coordinator
[coordinator:693]: build maxgraph frontend 10.244.0.248:59580 for graph 4422182077725296
/work/analytical_engine/core/object/gs_object.h:65] Object graph_w1UFK8u2[LabeledFragmentWrapper] is destructed.
/work/analytical_engine/core/object/gs_object.h:65] Object graph_w1UFK8u2[LabeledFragmentWrapper] is destructed.
Close GIE instance with command: /home/graphscope/.local/lib/python3.8/site-packages/graphscope.runtime/bin/giectl close_gremlin_instance_on_k8s /tmp/gs/gs/session_izynzsgw 4422182077725296 gs-engine-gs-d4wqq,gs-engine-gs-ks94z engine
```
```mermaid
graph LR
    %%subgraph k8s cluster
    a(coordinator - Master) --RPC--> b(etcd - PD)
    a ==RPC==> e
    %%-.RPC?.-> d(storage - v6d)
    subgraph gs-engine-pod
        subgraph v6d-container
            d(vineyard)
        end
        subgraph engine-container
            e -.IPC.-> d
            d -.IPC.-> e
            e(GSE - OLAP)
            f(GIE - OLTP)
            g(GLE - AI)
        end
    end
    b -.RPC.-> e
    %%end
```