Concurrency-Safe updating of labels with kubernetes-client
Question
I have a set of pods in my Kubernetes environment that are acting as "buffered resources" (identified by not having a certain label).
In my application (using kubernetes-client), I'd like to check whether a buffered resource is available and, if so, add a label so that it is no longer considered for other requests.
However, given parallelism, a pod that is marked as a buffered resource might be reserved by multiple threads at the same time, leading to all kinds of issues in the application.
Without locking the requests being made to Kubernetes, is there a safe way to add a label only if its key does not exist already (and fail otherwise)?
I'm using io.fabric8.kubernetes.client, and the code to update labels is more or less:
kubernetesClient.services().inNamespace(namespace).withName(resourceName).edit()
    .editMetadata()
        .addToLabels(Collections.unmodifiableMap(labels))
    .endMetadata()
    .done();
What is the best approach to handle concurrency when talking to the Kubernetes API?
Edit: I see that k8s has ResourceVersion, but from my first tests this does not seem to work as expected: the following query does NOT fail but succeeds and even assigns a new resourceVersion:
kubernetesClient.services().inNamespace(namespace).withName(resourceName).edit()
    .editMetadata()
        .withResourceVersion("13213414141") // definitely does not match the existing one
        .addToLabels(Collections.unmodifiableMap(labels))
    .endMetadata()
    .done();
Edit 2: The kubectl equivalent is something like:
kubectl label pods mypod foo=bar --namespace my-name --resource-version="313"
which correctly fails with the error "the object has been modified; please apply your changes to the latest version and try again".
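As a Java-side counterpart to that kubectl call, here is a minimal sketch of an optimistic-concurrency update with fabric8, assuming a client version that provides lockResourceVersion on the resource operations (the exact fluent chain differs between releases, so treat this as an outline):

import io.fabric8.kubernetes.api.model.Service;
import io.fabric8.kubernetes.client.KubernetesClientException;

// Read the current state, including its resourceVersion.
Service current = kubernetesClient.services()
        .inNamespace(namespace)
        .withName(resourceName)
        .get();

// Modify the labels on the locally fetched copy
// (assumes the service already has a labels map).
current.getMetadata().getLabels().putAll(labels);

try {
    // Unlike edit() above, replace() with a locked resourceVersion is
    // rejected with a 409 Conflict if the object changed since it was read.
    kubernetesClient.services()
            .inNamespace(namespace)
            .withName(resourceName)
            .lockResourceVersion(current.getMetadata().getResourceVersion())
            .replace(current);
} catch (KubernetesClientException e) {
    // Conflict: another thread updated the object first; re-read and retry.
}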
Answer 1
Score: 0
You can use a JSON Patch test operation to be concurrency-safe. Something like:
kubectl patch jobs/pi --type=json --patch='[{"op": "test", "path": "/metadata/labels/locked", "value": "false"}, {"op": "replace", "path": "/metadata/labels/locked", "value": "true"}]'
The patch is applied atomically: if the test operation fails (the label does not have the expected value), the whole patch is rejected.
The io.fabric8 Java SDK doesn't support the patch operation, but the official kubernetes-client/java does.
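For completeness, a rough sketch of the same patch through the official kubernetes-client/java, mirroring the mypod/my-name names from the kubectl example above (the exact signature of patchNamespacedPodCall varies between client versions, so treat this as an outline rather than copy-paste code):

import io.kubernetes.client.custom.V1Patch;
import io.kubernetes.client.openapi.ApiClient;
import io.kubernetes.client.openapi.ApiException;
import io.kubernetes.client.openapi.apis.CoreV1Api;
import io.kubernetes.client.openapi.models.V1Pod;
import io.kubernetes.client.util.Config;
import io.kubernetes.client.util.PatchUtils;

ApiClient client = Config.defaultClient();
CoreV1Api api = new CoreV1Api(client);

// Same two operations as the kubectl example: the patch is rejected as a
// whole if the "test" op fails, so only one caller can flip the label.
String patchBody = "["
    + "{\"op\": \"test\", \"path\": \"/metadata/labels/locked\", \"value\": \"false\"},"
    + "{\"op\": \"replace\", \"path\": \"/metadata/labels/locked\", \"value\": \"true\"}"
    + "]";

try {
    // PatchUtils sets the application/json-patch+json content type for us.
    V1Pod reserved = PatchUtils.patch(
        V1Pod.class,
        () -> api.patchNamespacedPodCall(
            "mypod", "my-name", new V1Patch(patchBody),
            null, null, null, null, null),
        V1Patch.PATCH_FORMAT_JSON_PATCH,
        client);
    // Success: this thread won the race and owns the pod.
} catch (ApiException e) {
    // The "test" op failed (or another error occurred): the pod was
    // already reserved by someone else.
}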