Martin Reinecke / pypocketfft / Merge requests / !40

Threadpool fixes

Merged Peter Bell requested to merge threadpool-fixes into master Sep 04, 2020

Fixes #14 (closed)

I'm a bit disappointed that std::hardware_destructive_interference_size isn't supported properly, but it seems none of the major standard libraries implement it, so there's no point even trying.

The deadlocks you were seeing might be related to the race condition I tried to fix here. There is a window in which the worker threads have all checked the shared work queue and found nothing to do, but the producer thread is just about to push a work item onto the queue. If the workers go to sleep in that window, they will never re-check the queue and the thread pool deadlocks. The fix there obviously wasn't good enough, so I've added an extra atomic variable to track this situation and guarantee the workers won't go to sleep.

I think Travis is more vulnerable to this race because its builds run on VMs with only 2 cores. The more threads there are, the less likely it is that all the workers are in this in-between state at once. I've now had quite a few Travis passes in a row with this addition, so I'm hopeful it's really fixed this time.
