Final Update: Sunday, November 18th 2018 20:26 UTC
We've confirmed that all systems are back to normal as of 11/18/2018 20:16 UTC. Our logs show the incident started on 11/18/2018 02:30 UTC. We apologize for any inconvenience this may have caused.
- Root Cause: The failure was due to an unknown issue with one of the ScaleUnits, which failed with timeouts while routing requests to the correct ScaleUnits serving customer accounts.
- Chance of Recurrence: Low - we have suspended the affected ScaleUnit, as it is not currently serving any external customers.
- Lessons Learned: We are working on minimizing resource-intensive activities in our post-deployment steps, and on adding targeted monitors to detect post-deployment issues in the future.
Sincerely,
Sudheer K
Update: Sunday, November 18th 2018 20:01 UTC
Our DevOps team continues to investigate issues with the Packaging service. The root cause is not fully understood at this time, though routing delays in one of the ScaleUnits are suspected. We currently have no estimated time for resolution.
- Workaround: None at this time
- Next Update: Before Monday, November 19th 2018 00:15 UTC
Sincerely,
Sudheer K
Initial Update: Sunday, November 18th 2018 18:48 UTC
We're actively investigating timeouts when accessing packages in all regions. Customers may experience timeouts when accessing packages in Azure DevOps. There is no workaround for this issue at this time.
- Next Update: Before Sunday, November 18th 2018 19:20 UTC
Sincerely,
Sudheer K