
Refactored the OpenMRS test suite #963

Open · wants to merge 4 commits into base: main
Conversation

PiusKariuki (Collaborator)

Summary

Refactor the OpenMRS test suite

Fixes #

Details

Removes duplicated setup and moves the mock interceptors into `beforeEach` blocks.

AI Usage

Please disclose how you've used AI in this work (it's cool, we just want to
know!):

  • Code generation (copilot but not intellisense)
  • Learning or fact checking
  • Strategy / design
  • Optimisation / refactoring
  • Translation / spellchecking / doc gen
  • Other
  • I have not used AI

You can read more details in our
Responsible AI Policy

Review Checklist

Before merging, the reviewer should check the following items:

  • Does the PR do what it claims to do?
  • If this is a new adaptor, has the adaptor been added to the marketing
    website?
  • If this PR includes breaking changes, do we need to update any jobs in
    production? Is it safe to release?
  • Are there any unit tests?
  • Is there a changeset associated with this PR? Should there be? Note that
    dev only changes don't need a changeset.
  • Have you ticked a box under AI Usage?

@josephjclark (Collaborator) left a comment:

Hi @PiusKariuki, thank you for this.

I'm really sorry but I'm not sold on some of the changes. As a rule I would prefer tests to be verbose but readable rather than super efficient. I think some of these refactors make the tests harder to read, understand, and extend. I also think we've lost some important test logic?

I don't want to spend a lot of time on this going forward. If you can quickly and easily keep some of the smaller changes - the state and data re-use for example - then do it. Otherwise I think I have to close this down. No more than an hour of dev from this point please.

@@ -66,65 +66,43 @@ describe('execute', () => {
});

describe('http', () => {
it('should GET with a query', async () => {
beforeEach(()=> {
Collaborator:

hmm, I think there are two problems with this structure 🤔

  1. The interceptor is quite far removed from the test. Like, physically. When I'm looking at the test, it's hard to understand how the mock endpoint will behave.
  2. Each of these interceptors only triggers once, so they're literally a 1:1 binding to the test that uses them. And again, they're quite far removed. You could use .persist() to make the interceptor permanent (until we tear it down), but that's only helpful if we have multiple tests using the same endpoint.

So I'm really sorry but I'm not convinced that this is a positive change.
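To make the one-shot behaviour concrete, here is a minimal, stdlib-only sketch of how an undici-style mock registry binds a non-persistent interceptor to exactly one request, and what `.persist()` changes. The `MockRegistry` class and its method names are invented for illustration; this is not the adaptor's real mock client, though undici's `MockAgent` behaves along these lines.

```javascript
// Hypothetical sketch (not the adaptor's real code): a registry where each
// interceptor is consumed by the first matching request unless persisted.
class MockRegistry {
  constructor() {
    this.interceptors = [];
  }

  // Register a mock for `path`; mirrors testServer.intercept({...}).reply(...)
  intercept(path) {
    const entry = { path, response: null, persistent: false, used: false };
    this.interceptors.push(entry);
    return {
      reply: (status, body) => {
        entry.response = { status, body };
        // persist() marks the interceptor reusable across requests
        return { persist: () => { entry.persistent = true; } };
      },
    };
  }

  // Dispatch a request: a non-persistent interceptor matches only once.
  dispatch(path) {
    const entry = this.interceptors.find(
      e => e.path === path && (e.persistent || !e.used)
    );
    if (!entry) return { status: 404, body: 'no matching interceptor' };
    entry.used = true;
    return entry.response;
  }
}
```

Under this model, an interceptor set up in `beforeEach` without `.persist()` can serve only the first request in a test, which is why each one ends up 1:1 with a single test.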

If I was to radically think about how to re-do these unit tests (and I'm really not sure I want to invest time in this), I would probably build a mock server and add an API to add data to it. So rather than mocking out very specific endpoints, I could just add or remove data to the server to get the right behaviour out of it. But I do think that's way too much work for this adaptor for now.


it('should auto-fetch patients with a limit', async () => {
Collaborator:

Have we dropped this test entirely? Why? It's testing some quite important behaviour

query: { q: 'Sarah', limit: 1 },
baseUrl: state.configuration.instanceUrl,
});
.reply(200, { results: testData.patientResults }, { ...jsonHeaders });
Collaborator:

I do approve of re-using the same test data - that makes the tests much more concise and readable

{ ...jsonHeaders }
);
// Define state only once
const state = { configuration }
Collaborator:

Yep - we could probably do this once at the top of the file couldn't we? Because actually, despite what this comment says, by my count we define state 14 times 😅

Collaborator (Author):

Yeah you're right

});

it('should not auto-fetch if the user sets startIndex', async () => {
testServer
.intercept({
Collaborator:

I'm concerned we've lost some important test logic here too

@PiusKariuki (Collaborator, Author):

Hi @josephjclark, I totally understand. Let me wrap this up quickly: I will revert the commit and keep just the state and data re-use logic.

Labels: none · Status: In review · 2 participants