fix: use kube logger with status waiter #31717
Conversation
Hello @AustinAbro321. Inspired by my snippet, I tested your code, and it is working as expected. Edit: I am wondering about the duplication in the logs of the deployment object. Here is during a
7708d02 to 743f851
@benoittgt Yeah, the logging strategy can be improved, though I believe it's a different scope than this PR. Right now it fires a log every time the status observer is called. The benefit of this is that a user will always know at least one resource that they're waiting on. The downside is that it fires a ton of logs, and the more resources in the chart, the more logs it writes. What do you think of a separate goroutine that logs a not-ready resource every X seconds, and removing the log from the status observer? This would mirror the behavior of the old Helm status logger. I could spin up a separate PR for this.
Completely agree.
I think it's a good idea. But maybe we should get other points of view first, to avoid unnecessary work for you.
da9b162 to 5c50f3e
@AustinAbro321 -- changes look good, can you please rebase / fix conflicts?
Signed-off-by: Austin Abro <austinabro321@gmail.com>
5b7efdf to 63b40a7
@gjenkins8 @TerryHowe Changes are rebased; CodeQL seems to be failing for unrelated reasons.
nice work, lgtm. though is this a feature or a bug that we didn't catch in the earlier fix: #31411?
@scottrigby Well observed. I think the bug was introduced here: #13604. And yes, it could have been fixed in #31411.
What this PR does / why we need it:
Currently, unless the global slog logger is set, SDK users will not receive any logs at all from the waiter. This is critical, as several minutes can go by without any feedback to users. Relates to #31585
If applicable:
(the `docs needed` label should be applied if so)