Test cases fail randomly because of missing elements #750
Comments
We see the same thing in our codebase. I don't think it's easy to help you without more details, though (the exact error message, the OS/memory/processor specs of the machines running the tests, the configuration you're using for max-cases and `max_wait_time`, and the Chrome and chromedriver versions). One suggestion from me is to enable screenshots. For us, this has shown it's always the case that the next page hasn't loaded in time: the screenshots often show the browser still displaying the previous page whose button or link was clicked, hence the missing element or text.

My feeling is that this is probably not a Wallaby issue, given it's a fairly common challenge when writing reliable browser tests at all. They are simply slower, have more moving parts, and are more prone to this kind of behaviour. My plan was to experiment with increasing our timeouts and reducing the number of tests run in parallel, to try to reach a point of more stability, and that might be the best option for you too.

I'd welcome other suggestions for how to debug and tune things. I am surprised that I have seen failures locally even with the 7 second timeout and 1 test case running at a time. I am running on a 2015 MacBook Pro, though, which has a 2.2 GHz Quad-Core Intel Core i7 processor and 16 GB 1600 MHz DDR3 RAM. I'll hopefully be upgrading my machine soon!
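As a rough sketch of what that tuning can look like (the values below are examples, not recommendations, and assume a standard Phoenix/ExUnit test setup), screenshots, the query timeout, and test parallelism are all plain configuration:

```elixir
# config/test.exs — save a screenshot when a test fails, and raise Wallaby's
# query timeout above the 3_000 ms default. The 7_000 here is illustrative.
config :wallaby,
  screenshot_on_failure: true,
  max_wait_time: 7_000
```

```elixir
# test/test_helper.exs — run one test case at a time instead of the default
# (twice the number of schedulers). Equivalent to `mix test --max-cases 1`.
ExUnit.start(max_cases: 1)
```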
I'm confused about what uses timeouts and what doesn't, and what the best way is to wait on some DOM. This code was flaky for us:

```elixir
|> visit("/users/sign_in")
|> fill_in(text_field("user[email]"), with: user.email)
|> fill_in(text_field("user[password]"), with: user.password)
|> click(button("Sign In"))
```

When it failed, Wallaby would raise an error that it couldn't find the element. Changing the code to this makes the test not flaky:

```elixir
|> visit("/users/sign_in")
|> find(button("Sign In"), fn _ -> nil end)
|> fill_in(text_field("user[email]"), with: user.email)
|> fill_in(text_field("user[password]"), with: user.password)
|> click(button("Sign In"))
```

Using `find` with a no-op callback like this effectively waits for the form to be present before the rest of the pipeline runs.
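One way to get similar waiting behaviour through Wallaby's built-in retrying, rather than a no-op `find` callback, is to assert on the button before interacting with the form. This is only a sketch, assuming the same imports as the snippet above; `assert_has/2` retries the query until the configured `max_wait_time` elapses and returns the session, so it can sit in the pipeline:

```elixir
# Wait for the sign-in form to be rendered before filling it in.
# assert_has/2 polls for the query up to max_wait_time.
session
|> visit("/users/sign_in")
|> assert_has(button("Sign In"))
|> fill_in(text_field("user[email]"), with: user.email)
|> fill_in(text_field("user[password]"), with: user.password)
|> click(button("Sign In"))
```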
Not because of flakiness, but to wait on external hardware to do something that should cause the UI to update, I just wrote this little function:

```elixir
@doc """
In a Wallaby test, wait up to the specified number of milliseconds for the
specified element to be present.

Note that Wallaby has its own timeout (apparently around 3 seconds) when
checking for an element, so that much delay is built in every time this
function is called or recurses.
"""
@spec await_element_for_ms(
        parent :: Wallaby.Browser.parent(),
        query :: Wallaby.Query.t(),
        ms :: integer()
      ) :: Wallaby.Browser.parent()
def await_element_for_ms(parent, _query, ms) when is_integer(ms) and ms <= 0 do
  # Time budget exhausted; continue the test.
  parent
end

def await_element_for_ms(parent, query, ms) when is_integer(ms) do
  started_at = System.monotonic_time(:millisecond)

  # This appears to take about 3 seconds when the element is absent,
  # because of Wallaby's own max_wait_time.
  present? = Wallaby.Browser.has?(parent, query)

  if present? do
    # Element found; continue the test.
    parent
  else
    # Subtract the time already spent and keep waiting.
    ended_at = System.monotonic_time(:millisecond)
    elapsed = ended_at - started_at
    await_element_for_ms(parent, query, ms - elapsed)
  end
end
```

Maybe that would be useful for y'all? If the maintainers would like to put this into Wallaby, feel free to use or adapt this code.
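A hypothetical usage of the helper, assuming it is imported into the test module, `Wallaby.Query` is aliased as `Query`, and using an illustrative CSS selector:

```elixir
# Wait up to 10 seconds for a status element driven by external hardware,
# then assert on its contents. Selector and text are examples only.
session
|> await_element_for_ms(Query.css("[data-role='device-status']"), 10_000)
|> assert_has(Query.css("[data-role='device-status']", text: "Ready"))
```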
Elixir and Erlang/OTP versions
Elixir: 1.14.3
Erlang: 25.3
Operating system
Mac
Browser
Chrome
Driver
Chromedriver
Correct Configuration
Current behavior
We have 2000+ test cases that use Wallaby; we use it to test LiveViews. Whenever we run the suite, random test cases fail because of missing elements or mismatched element counts. Sometimes they pass locally but fail in the pipeline (GitHub Actions), where we see the same behaviour (random failures).
Expected behavior
We need consistency: if a test case is implemented incorrectly, it should fail every time, and if it is implemented correctly, it should pass every time. Instead, we are seeing inconsistent results.
Test Code & HTML
This test case sometimes fails at:

```elixir
|> assert_text("Order Canceled")
```
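One option worth trying here (only a sketch, reusing the `await_element_for_ms` helper shared above, with an arbitrary 10 second budget) is to wait explicitly for the text before asserting on it:

```elixir
# Give the LiveView extra time to render the confirmation text before
# asserting; the 10_000 ms value is an example, not a recommendation.
session
|> await_element_for_ms(Wallaby.Query.text("Order Canceled"), 10_000)
|> assert_text("Order Canceled")
```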
Demonstration Project
No response