Unpacking GitHub Actions Delays: When Self-Hosted Runners Stay Idle and Workflows Queue, Impacting Development Efficiency

In the fast-paced world of software development, continuous integration and continuous delivery (CI/CD) pipelines are the lifeblood of an efficient engineering process. When these pipelines stall, even for a few minutes, the ripple effect can significantly impact team productivity and morale. A recent GitHub Community discussion, initiated by user shurikovyy, brings to light a particularly frustrating intermittent issue: self-hosted GitHub Actions runners appear Online/Idle, yet workflow runs remain stubbornly Queued and unassigned.

Developer frustrated by queued workflow while self-hosted runner is idle.

The Mystery of the Stalled Workflow

The core of the problem, first noticed around February 2026, involves GitHub Actions workflows failing to start despite an available self-hosted runner. The workflow run stays Queued, with no runner assigned (indicated by runner_id=0 in job details), even though the runner is reported as online and busy=false via the GitHub API. The critical observation is that these queued runs do not resolve themselves; they only get picked up after a manual intervention, such as triggering a re-run, using workflow_dispatch, or pushing another commit.

A Case Study in Frustration

Shurikovyy provided a detailed example from February 10, 2026. A push to the develop branch created Run 21863545638 at 11:45:00Z. For approximately 7-8 minutes, this run remained queued while the designated airflow runner (version 2.331.0, running as a systemd service) was online and idle. The runner logs confirmed it did not receive any job request until 11:52:52Z, after a manual workflow_dispatch was initiated to unblock the process. This clearly demonstrates a disconnect between GitHub's scheduling system and the runner's availability.
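Since the report notes that a manual workflow_dispatch reliably unblocked the run, one pragmatic (if blunt) mitigation is a watchdog that nudges the queue whenever a run has been waiting too long. The sketch below is a minimal outline, not the author's actual setup: the five-minute threshold, the `ci.yml` workflow name, and the `develop` ref are assumptions, and the real polling loop (which needs `gh`, `jq`-style filtering, and GNU `date`) is shown only in comments.

```shell
#!/usr/bin/env bash
# Hypothetical watchdog sketch for the stuck-queue symptom described in the post.
set -euo pipefail

REPO="IcoverLLC/AirFlow"   # repository from the discussion
THRESHOLD=300              # seconds a run may stay queued before nudging (assumption)

# Pure helper: decide whether a queued run has waited past the threshold.
should_nudge() {
  local queued_seconds=$1 threshold=$2
  [ "$queued_seconds" -ge "$threshold" ]
}

# Example polling loop (illustrative only; requires gh, and GNU date for -d):
# while true; do
#   created=$(gh api "/repos/$REPO/actions/runs?status=queued&per_page=1" \
#     --jq '.workflow_runs[0].created_at // empty')
#   if [ -n "$created" ]; then
#     age=$(( $(date -u +%s) - $(date -u -d "$created" +%s) ))
#     if should_nudge "$age" "$THRESHOLD"; then
#       gh workflow run ci.yml --ref develop   # hypothetical workflow file/ref
#     fi
#   fi
#   sleep 60
# done
```

Note that this treats the symptom, not the cause: it simply automates the manual nudge that the post found effective, while the underlying dispatch delay remains on GitHub's side.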

Rigorous Diagnostics Point to GitHub's Side

What makes this discussion particularly insightful is the comprehensive diagnostic work undertaken by shurikovyy. They leveraged multiple data sources to rule out local issues:

  • GitHub API View: Repeatedly queried GitHub API endpoints (/repos/.../actions/runs, /repos/.../actions/runs/{RUN_ID}/jobs, /repos/.../actions/runners) from the runner host. These consistently showed the run as queued, the job as unassigned (runner_id=0), and the runner as online and busy=false.
  • Runner Internal Logs: Examination of _diag/Runner_*.log and _diag/Worker_*.log confirmed the runner received no job request during the queued period until the manual trigger.
  • Host Networking Snapshot: A lightweight network watch confirmed stable connectivity to broker.actions.githubusercontent.com, successful DNS resolution, and no NIC errors or drops during the incident window.
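The "lightweight network watch" in the last bullet can be as simple as a loop that probes DNS resolution and HTTPS reachability of the broker endpoint, appending one line per sample. A hedged sketch, assuming a Linux host with `dig` and `curl` available; the 30-second interval and log file name are my own choices, and the probe loop is left as comments so the helper stays self-contained:

```shell
#!/usr/bin/env bash
# Sketch of a lightweight connectivity watch for the Actions broker endpoint.
set -euo pipefail

# Pure helper: format one watch sample as a single log line.
format_probe() {
  local ts=$1 dns=$2 http=$3
  printf '%s dns=%s http=%s\n' "$ts" "$dns" "$http"
}

# Example probe loop (illustrative; writes to broker-watch.log):
# while true; do
#   ts=$(date -u +%Y-%m-%dT%H:%M:%SZ)
#   dns=$(dig +short broker.actions.githubusercontent.com | head -n1)
#   http=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 \
#          https://broker.actions.githubusercontent.com/ || echo err)
#   format_probe "$ts" "${dns:-fail}" "$http" >> broker-watch.log
#   sleep 30
# done
```

A log like this is exactly what let shurikovyy assert that DNS, TLS, and the NIC were healthy throughout the incident window, shifting suspicion to the scheduling side.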

The conclusion from these sources is compelling: "This looks like a delay or issue in job dispatch / broker messaging / scheduling on the GitHub Actions side (or something GitHub can see in backend logs), rather than a local runner outage."

Here are examples of the diagnostic commands used:

gh api "/repos/IcoverLLC/AirFlow/actions/runs?per_page=1&status=queued"
gh api /repos/IcoverLLC/AirFlow/actions/runs/21863545638/jobs
gh api /repos/IcoverLLC/AirFlow/actions/runners
curl -I https://broker.actions.githubusercontent.com/
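These three API views can be combined into a single check for the contradictory state the post describes: a run still queued, a job with runner_id=0, and a registered runner reporting status=online and busy=false. A minimal sketch, assuming the relevant fields have already been extracted (for example with `gh api --jq`); the extraction lines in the comments are illustrative, not the author's exact commands:

```shell
#!/usr/bin/env bash
# Detect the "queued run + idle online runner" contradiction from the discussion.
set -euo pipefail

# Pure helper: returns success only when all four fields match the stuck state.
is_stuck() {
  local run_status=$1 runner_id=$2 runner_status=$3 busy=$4
  [ "$run_status" = "queued" ] && [ "$runner_id" = "0" ] \
    && [ "$runner_status" = "online" ] && [ "$busy" = "false" ]
}

# Example field extraction (illustrative; run/job/runner indices are assumptions):
# run_status=$(gh api /repos/IcoverLLC/AirFlow/actions/runs/21863545638 --jq '.status')
# runner_id=$(gh api /repos/IcoverLLC/AirFlow/actions/runs/21863545638/jobs \
#             --jq '.jobs[0].runner_id // 0')
# runner_status=$(gh api /repos/IcoverLLC/AirFlow/actions/runners --jq '.runners[0].status')
# busy=$(gh api /repos/IcoverLLC/AirFlow/actions/runners --jq '.runners[0].busy')
# is_stuck "$run_status" "$runner_id" "$runner_status" "$busy" \
#   && echo "stuck: dispatch appears stalled on GitHub's side"
```

When all four conditions hold at once, the local evidence is consistent with the post's conclusion: the runner is ready, but the job was never dispatched to it.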
Diagram illustrating a job stuck in a scheduling bottleneck before reaching an idle self-hosted runner.

Impact on Development Efficiency

Such intermittent and unpredictable delays directly undermine development efficiency. Developers rely on swift feedback from CI/CD pipelines. When a pipeline unexpectedly stalls, it leads to:

  • Wasted Developer Time: Developers are left waiting, context-switching, or manually intervening, pulling them away from core coding tasks.
  • Delayed Deployments: Critical updates or bug fixes can be held up, impacting release cycles and time-to-market.
  • Reduced Trust in Automation: Intermittent failures erode confidence in the automation system, potentially leading to less reliance on CI/CD or more manual checks.

This scenario highlights a crucial area for improvement in GitHub Actions' scheduling robustness, directly affecting overall developer productivity and the smooth flow of work.

Seeking Community and GitHub Insights

Shurikovyy's post is a plea for answers: Are there known issues? What specific aspects of the broker-based architecture could cause this? What further diagnostics are recommended? Unfortunately, the initial response from GitHub was a generic "Product Feedback Has Been Submitted" message, offering no immediate solution or specific guidance.

This leaves the community to collaborate. If you've experienced similar issues with self-hosted runners and queued workflows, sharing your insights, workarounds, or additional diagnostic steps can be invaluable. Improving the reliability of GitHub Actions is key to enhancing development efficiency for everyone.