
[BUG] - Multi-core scheduling encounters data races #45

Open
4 tasks done
bSchnepp opened this issue Dec 20, 2022 · 2 comments
Assignees: bSchnepp
Labels: bug (Something isn't working), help wanted (Extra attention is needed)

@bSchnepp (Owner)

Issue Checklist

  • A related or similar issue is not already marked as open
  • The steps to reproduce have been tested, and do produce the issue described
  • If relevant, graphical issues have a screenshot attached as well. Text-only issues list the text and its corrected version within a Markdown code block
  • The most recent commit on the master branch in which the bug is present is listed in this report, along with its commit hash

=====================================================
Bug Description
Switching processes appears to have a side effect on memory accesses: either the TLB isn't being fully invalidated as expected, or there is a race in which the page table is switched before it has been updated for the current context.
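
If the race explanation is the right one, the usual fix shape is a shootdown/grace-period ordering: publish the new page table, wait for every other core to acknowledge that it has flushed and stopped using the old one, and only then reuse or free it. The snippet below is a small user-space analogue of that ordering (plain threads and atomics; every name is invented here, and this is not the kernel's actual code):

```cpp
// User-space analogue of the required ordering: the old "page table" is only
// freed once every simulated core has acknowledged the swap, which stands in
// for the TLB-shootdown / IPI-acknowledge step a real kernel would perform.
#include <array>
#include <atomic>
#include <cstdio>
#include <thread>
#include <vector>

constexpr int kCores = 4;

struct PageTable {
    std::vector<unsigned long> entries;
};

std::atomic<PageTable*> g_active{nullptr};
std::atomic<unsigned long> g_epoch{0};                      // bumped on every swap
std::array<std::atomic<unsigned long>, kCores> g_seen{};    // last swap each "core" observed
std::atomic<bool> g_stop{false};

// Each worker models a core: it walks the current table, then records the epoch
// it has caught up to, the moral equivalent of "my TLB is flushed".
void core_loop(int id) {
    while (!g_stop.load(std::memory_order_acquire)) {
        PageTable* t = g_active.load(std::memory_order_acquire);
        unsigned long sum = 0;
        for (unsigned long e : t->entries) sum += e;
        (void)sum;
        g_seen[id].store(g_epoch.load(std::memory_order_acquire),
                         std::memory_order_release);
    }
}

// The "scheduler": publish the next table, bump the epoch, and free the old
// table only after every core has acknowledged the new epoch.
void swap_table(PageTable* next) {
    PageTable* prev = g_active.exchange(next, std::memory_order_acq_rel);
    unsigned long epoch = g_epoch.fetch_add(1, std::memory_order_acq_rel) + 1;
    for (int c = 0; c < kCores; ++c)
        while (g_seen[c].load(std::memory_order_acquire) < epoch)
            std::this_thread::yield();
    delete prev;   // safe now: no core can still be walking it
}

int main() {
    g_active.store(new PageTable{std::vector<unsigned long>(512, 0)});
    std::vector<std::thread> cores;
    for (int c = 0; c < kCores; ++c) cores.emplace_back(core_loop, c);
    for (unsigned long i = 1; i <= 100; ++i)
        swap_table(new PageTable{std::vector<unsigned long>(512, i)});
    g_stop.store(true, std::memory_order_release);
    for (auto& t : cores) t.join();
    delete g_active.load();
    std::printf("done\n");
}
```

Skipping the acknowledgment step (freeing or reusing `prev` immediately) gives exactly the kind of stale-table access the description suggests; in-kernel the acknowledgment would come from an IPI handler or a forced TLB flush rather than a loop iteration.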

To Reproduce
Please list the steps to produce the bug below:

  1. Re-enable multi-core scheduling
  2. Attempt to run the OS
  3. A crash involving page table entries quickly occurs

Screenshots
If relevant, please provide screenshots here.

Expected behavior
Memory must always be in a consistent state

Additional information
Any additional information should be placed here.

@bSchnepp bSchnepp self-assigned this Dec 20, 2022
@bSchnepp bSchnepp added bug Something isn't working help wanted Extra attention is needed labels Dec 20, 2022
@bSchnepp (Owner, Author)

Possibly related to #28

@bSchnepp (Owner, Author)

This may be a good opportunity to just implement MuQSS ourselves. It would be nice to prove some properties of the algorithm along the way (e.g., that a low-priority process never creates priority inversion and is always eventually run again). At a high level, it's easier to understand than CFS, and it should provide some nice latency bounds anyway.
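
For reference, here is a deliberately simplified, single-threaded sketch of the core idea (per-CPU runqueues ordered by virtual deadline, and a pick step that scans every queue and takes the globally earliest deadline while locking only one queue at a time). MuQSS proper uses per-CPU skiplists with lockless peeking and its own deadline formula; the container, the constants, and every name below are stand-ins for illustration only:

```cpp
// Simplified MuQSS-style sketch: earliest-virtual-deadline-first across
// per-CPU queues. Not the real algorithm; all names and constants are invented.
#include <array>
#include <cstdint>
#include <cstdio>
#include <map>
#include <mutex>

struct Task {
    int pid;
    int prio_ratio;   // larger ratio = lower priority = deadlines further out
};

struct RunQueue {
    std::mutex lock;
    std::multimap<uint64_t, Task> by_deadline;   // key: virtual deadline
};

constexpr int      kCpus        = 2;
constexpr uint64_t kTimesliceNs = 6'000'000;     // placeholder round-robin interval

// Requeue a task with a deadline prio_ratio timeslices in the future. Because
// the deadline is finite and only moves when the task actually runs, a waiting
// task's key eventually becomes the smallest, so it is always run again.
void enqueue(RunQueue& rq, uint64_t now, Task t) {
    std::lock_guard<std::mutex> g(rq.lock);
    rq.by_deadline.emplace(now + (uint64_t)t.prio_ratio * kTimesliceNs, t);
}

// Pick the task with the earliest deadline across all queues, locking one queue
// at a time. A real multi-core version must re-check under the lock before
// dequeueing, since another CPU may have raced us to the same task.
Task pick_next(std::array<RunQueue, kCpus>& rqs) {
    RunQueue* best = nullptr;
    uint64_t best_deadline = UINT64_MAX;
    for (auto& rq : rqs) {
        std::lock_guard<std::mutex> g(rq.lock);
        if (!rq.by_deadline.empty() &&
            rq.by_deadline.begin()->first < best_deadline) {
            best_deadline = rq.by_deadline.begin()->first;
            best = &rq;
        }
    }
    if (best == nullptr) return Task{-1, 0};     // nothing runnable: idle
    std::lock_guard<std::mutex> g(best->lock);
    auto it = best->by_deadline.begin();
    Task t = it->second;
    best->by_deadline.erase(it);
    return t;
}

int main() {
    std::array<RunQueue, kCpus> rqs;
    uint64_t now = 0;
    enqueue(rqs[0], now, Task{1, 1});            // higher-priority task
    enqueue(rqs[1], now, Task{2, 8});            // lower-priority task
    for (int step = 0; step < 10; ++step) {
        Task t = pick_next(rqs);
        std::printf("t=%llu ns: ran pid %d\n", (unsigned long long)now, t.pid);
        now += kTimesliceNs;
        enqueue(rqs[step % kCpus], now, t);      // crude stand-in for CPU placement
    }
}
```

Running this shows the low-priority task being passed over for a while and then picked once its deadline is the earliest, which is the informal version of the "always runs again" property that would be worth proving formally.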
