420: Test Database Woes
The Bike Shed - A podcast by thoughtbot - Tuesdays
Joël shares his recent project challenge with Tailwind CSS, where classes weren't generating as expected due to the dynamic nature of Tailwind's CSS generation and pruning. Stephanie introduces a personal productivity tool, a "thinking cap," to signal her thought process during meetings, which also serves as a physical boundary to separate work from personal life. The conversation shifts to testing methodologies within Rails applications, leading to an exploration of testing philosophies, including developers' assumptions about database cleanliness and their impact on writing tests.

Avdi's classic post on how to use Database Cleaner
RSpec change matcher
Command/Query separation
When not to use factories
Why Factories?

Transcript:

STEPHANIE: Hello and welcome to another episode of The Bike Shed, a weekly podcast from your friends at thoughtbot about developing great software. I'm Stephanie Minn.

JOËL: And I'm Joël Quenneville. And together, we're here to share a bit of what we've learned along the way.

STEPHANIE: So, Joël, what's new in your world?

JOËL: I'm working on a new project, and this is a project that uses Tailwind CSS for its styling. And I ran into a bit of an annoying problem with it just getting started, where I was making changes and adding classes, and they were not changing the things I thought they would change in the UI. And so, I looked up the class in the documentation, and then I realized, oh, we're on an older version of the Tailwind Rails gem. So, maybe we're using...like, I'm looking at the most recent docs for Tailwind, but it's not relevant for the version I'm using. Turned out that was not the problem. Then I decided to use the Web Inspector and actually look at the element in my browser to see, is it being overwritten somehow by something else? And the class is there on the element, but when I look at the CSS panel, it does not show up there at all or have any effect. And that got me scratching my head. And then, eventually, I figured it out, and it's a bit of a facepalm moment [laughs].

STEPHANIE: Oh, okay.

JOËL: Because Tailwind has to, effectively, generate all of these, and it will sort of generate and prune the things you don't need and all of that. They're not all, like, statically present. And so, if I was using a class that no one else in the app had used yet, it hadn't gotten generated. And so, it's just not there. There's a class on the element, but there's no CSS definition tied to it, so the class does nothing. What you need to do is there's a rake task or some sort of task that you can run that will generate things. There's also, I believe, a watcher that you can run, some sort of, like, server that will auto-generate these for you in dev mode. I did not have that set up. So, I was not seeing that new class have any effect. Once I ran the task to generate things, sure enough, it worked. And Tailwind works exactly how the docs say it does. But that was a couple of hours of my life that I'm not getting back.

STEPHANIE: Yeah, that's rough. Sorry to hear. I've also definitely gone down that route of, like, oh, it's not in the docs. The docs are wrong. Like, do they even know what they're talking about? I'm going to fix this for everyone. And similarly have been humbled by a facepalm solution when I'm like, oh, did I yarn [laughs]? No, I didn't [laughs].

JOËL: Uh-huh. I'm curious, for you, when you have sort of moments where it's like the library is not behaving the way you think it is, is your default to blame yourself, or is it to blame the library?
STEPHANIE: [laughs] Oh, good question.

JOËL: And the follow-up to that is, are you generally correct?

STEPHANIE: Yeah. Yep, yep, yep. Hmm, I will say I externalize the blame, but I will try to at least do, like, the basic troubleshooting steps of restarting my server [laughter], and then if...that's as far as I'll go. And then, I'll be like, oh, like, something must be wrong, you know, with this library, and I turn to Google. And if I'm not finding any fruitful results, again, you know, one path could be, oh, maybe I'm not Googling correctly, but the other path could be, maybe I've discovered something that no one else has before. But to your follow-up question, I'm almost, like, always wrong [laughter]. I'm still waiting for the day when I, like, discover something that is an actual real problem, and I can go and open an issue [chuckles] and, hopefully, be validated by the library author.

JOËL: I think part of what I heard is that your debugging strategy is basic, but it's not as basic as Joël's because you remember to restart the server [chuckles].

STEPHANIE: We all have our days [laughter].

JOËL: Next time. So, Stephanie, what is new in your world?

STEPHANIE: I'm very excited to share this with you. And I recognize that this is an audio medium, so I will also describe the thing I'm about to show you [laughs].

JOËL: Oh, this is an object.

STEPHANIE: It is an object. I got a hat [laughs].

JOËL: Okay.

STEPHANIE: I'm going to put it on now. It's a cap that says "Thinking" on it [laughs] in, like, you know, fun sans serif font with a little bit of edge because the thinking is kind of slanted. So, it is designy, if you will. It's my thinking cap. And I've been wearing it at work all week, and I love it. As a person who, in meetings and, you know, when I talk to people, I have to process before I respond a lot of the time, but that has been interpreted as, you know, maybe me not having anything to say or, you know, people aren't sure if I'm, you know, still thinking or if it's time to move on. And sometimes I [chuckles], you know, take a long time. My brain is just spinning. I think another funny hat design would be, like, the beach ball, the macOS beach ball.

JOËL: That would be hilarious.

STEPHANIE: Yeah. Maybe I need to, like, stitch that on the back of this thinking cap. Anyway, I've been wearing it at work in meetings. And then, when I'm just silently processing, I'll just point to my hat and signal to everyone what's [laughs] going on. And it's also been really great for the end of my work day because then I take off the hat, and because I've taken it off, that's, like, my signal, you know, I have this physical totem that, like, now I'm done thinking about work, and that has been working.

JOËL: Oh, I love that.

STEPHANIE: Yeah, that's been working surprisingly well to kind of create a bit more of a boundary to separate work thoughts and life thoughts.

JOËL: Because you are working from home, and so that boundary between professional life and personal life can get a little bit blurry.

STEPHANIE: Yeah. I will say I take it off and throw it on the floor kind of dramatically [laughter] at the end of my work day. So, that's what's new. It had a positive impact on my work-life balance. And yeah, if anyone else has the problem of people being confused about whether you're still thinking or not, recommend looking into a physical thinking cap.

JOËL: So, you are speaking at RailsConf this spring in Detroit. Do you plan to bring the thinking cap to the conference?

STEPHANIE: Oh yeah, absolutely.
That's a great idea. If anyone else is going to RailsConf, find me in my thinking cap [laughs].

JOËL: So, this is how people can recognize Bike Shed co-host Stephanie Minn. See someone walking around with a thinking cap.

STEPHANIE: Ooh. thinkingbot?

JOËL: Ooh.

STEPHANIE: Have I just designed new thoughtbot swag [laughter]? We'll see if this catches on.

JOËL: So, we were talking recently, and you'd mentioned that you were facing some really interesting dilemmas when it came to writing tests and particularly how tests interact with your test database.

STEPHANIE: Yeah. So, I recently, a few weeks ago, joined a new client project, and, you know, one of the first things that I do is start to run those tests [laughs] in their codebase to get a sense of what's what. And I noticed that they were taking quite a long time to get set up before I even saw any progress in terms of successes or failures. So, I was kind of curious what was going on before the examples were even run. And when I tailed the logs for the tests, I noticed that every time that you were running the test suite, it would truncate all of the tables in the test database. And that was a surprise to me because that's not a thing that I had really seen before. And so, basically, what happens is all of the data in the test database gets deleted using this truncation strategy. And this is one way of ensuring a clean slate when you run your tests.

JOËL: Was this happening once at the beginning of the test suite or before every test?

STEPHANIE: It was good that it was only running once before the test suite, but since, you know, in my local development, I'm running, like, a file at a time or sometimes even just targeting a specific line, this would happen on every run in that situation and was just adding a little bit of extra time to that feedback loop in terms of just making sure your code was working, if that's part of your workflow.

JOËL: Do you know what version of Rails this project was on? Because I know this was popular in some older versions of Rails as a strategy.

STEPHANIE: Yeah. So, it is Rails 7 now, recently upgraded to Rails 7. It was on Rails 6 for a little while.

JOËL: Very nice. I want to say that truncation is generally not necessary as of Rails...I forget if it's 5 or 6. But back in the day, specifically for what are now called system tests, the sort of, like, Capybara UI-driven browser tests, you had, effectively, like, two threads that were trying to access the database. And so, you couldn't have your test data wrapped in a transaction the way you would for unit tests because then the UI thread would not have access to the data that had been created in a transaction just for the test thread. And so, people would use tools like Database Cleaner to use a truncation strategy to clear out everything between tests to allow a sort of clean slate for these UI-driven feature specs. And then, I want to say it's Rails 5, it may have been Rails 6, when system tests were added. And one of the big things there was that they could now, like, share data in a transaction instead of having to do two separate threads where one didn't have access to it. And all of a sudden, now you could go back to transactional fixtures the way that you could with unit tests and really take advantage of something that's really nice and built into Rails.
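For anyone who hasn't seen this pattern in the wild, here is a minimal sketch of the legacy setup Joël describes, loosely following the Database Cleaner README's recommended RSpec configuration (the exact hooks vary from app to app):

```ruby
# spec/rails_helper.rb -- the pre-system-test era pattern: feature specs ran
# the app in a separate thread, so they couldn't share a transaction with
# the test thread and fell back to truncation.
RSpec.configure do |config|
  config.use_transactional_fixtures = false

  config.before(:suite) do
    # Start the suite from a clean slate.
    DatabaseCleaner.clean_with(:truncation)
  end

  config.before(:each) do
    # Fast default: wrap each example in a transaction.
    DatabaseCleaner.strategy = :transaction
  end

  config.before(:each, type: :feature) do
    # Browser-driven specs can't see uncommitted data, so truncate instead.
    DatabaseCleaner.strategy = :truncation
  end

  config.around(:each) do |example|
    DatabaseCleaner.cleaning { example.run }
  end
end
```

With native Rails system tests, the browser-driving thread shares the test's transaction, so a setup like this can usually be deleted outright.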
STEPHANIE: That's cool. I didn't know that about system tests and that kind of shift happening. I do think that, in this case, it was one of those situations where, in the past, the database truncation, in particular using the Database Cleaner gem, was necessary, and that just never got reassessed as the years went by.

JOËL: That's one of the classic things, right? When you upgrade a Rails app over multiple versions, sometimes you sort of get a new feature that comes in for free with the new version, and you might not be aware of it. And some of the patterns in the app just kind of keep going. And you don't realize, hey, this part of the app could actually be modernized.

STEPHANIE: So, another interesting thing about this testing situation is that I learned that, you know, if you ran these tests, you would experience this truncation strategy. But the engineering team had also kind of played around with having a different test setup that didn't clean the database at all unless you opted into it.

JOËL: So, your test database would just...each test would just keep writing to the database, but they're not wrapped in transactions. Or they are wrapped in transactions, but you may or may not have some additional data.

STEPHANIE: The latter. So, I think they were also using the transaction strategy there. But, you know, there are some reasons that you would still have some data persisted across test runs. I had actually learned that the use_transactional_fixtures config for RSpec doesn't roll back any data that might have been created in a before(:context) hook.

JOËL: Yep, or a before(:all). Yeah, the transaction wraps the actual example, but not anything that happens outside of it.

STEPHANIE: Yeah, I thought that was an interesting little gotcha.
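Here's a small sketch of that gotcha (Plan and Subscription are hypothetical models; the behavior itself is just how RSpec's per-example transaction interacts with before(:context)):

```ruby
RSpec.describe Subscription do
  before(:context) do
    # Runs once, outside any example's transaction, so this row is NOT
    # rolled back and lingers in the test database after the run.
    @plan = Plan.create!(name: "basic")
  end

  after(:context) do
    # Cleanup has to be explicit; the per-example rollback won't touch it.
    @plan.destroy
  end

  it "starts on the basic plan" do
    # This record IS rolled back when the example finishes.
    subscription = Subscription.create!(plan: @plan)
    expect(subscription.plan.name).to eq("basic")
  end
end
```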
STEPHANIE: So, you know, now we had these, like, two different ways to run tests. And I was chatting with a client developer about how that came to be. And we then got into an interesting conversation about, like, whether or not we each expect a clean database in the first place when we write our tests or when we run our tests, and that was an area where we disagreed. And that was cool because I had not really, like, thought about, like, oh, how did I even arrive at this assumption that my database would always be clean? I think it was just, you know, from experience having only worked in Rails apps of a certain age that really got onto the Database [laughs] Cleaner train. But it was interesting because I think that is a really big assumption to make that shapes how you then approach writing tests.

JOËL: And there's kind of a couple of variations on that. I think the sort of Basecamp approach of writing Rails with fixtures, you just sort of have, for the most part, an existing set of data that you maybe layer a few extra things on. But at a base level, you just expect a bunch of data to exist in your test database. So, it's almost going off the opposite assumption, where you can always assume that certain things are already there. Then there's the other extreme of, like, you always assume that it's empty. And it sounds like maybe there's a position in the middle of, like, you never know. There may be something. There may not be something, you know, spin the wheel.

STEPHANIE: Yeah. I guess I was surprised that, you know, that was just a question that I never really asked myself prior to this conversation, but it could feel like different testing philosophies. But yeah, I was very interested in this, you know, kind of opinion that was a little bit different from mine about if you assume that your test database is not clean, that kind of perhaps nudges you in the direction of writing tests that are less coupled to the database if they don't need to be.

JOËL: What does coupling to the database mean in this situation?

STEPHANIE: So, I'm thinking about Rails tests that might be asserting on a change in database behavior, so the change matcher in RSpec is one that I see maybe sometimes used when it doesn't need to be used. And we're expecting, like, the count of the number of records for a model to have changed after doing some work, right?

JOËL: And the change matcher from RSpec is one that allows you to not care whether there are existing records or not. It sort of insulates you from that.

STEPHANIE: That's true. Though I guess I was thinking almost, like, what if there was some return value to assert on instead? And would that kind of help you separate some side effects from methods that might be doing too much? And kind of when I start to see tests that have both, or are asserting on something being returned and then also on something happening, that's one way of, like, figuring out what kind of coupling is going on inside this test.

JOËL: It's the classic command-query separation principle from object-oriented design.

STEPHANIE: I think another one that came to mind, another example, especially when you're talking about system tests, is when you might be using Capybara, and you end up...maybe you're going through a flow that creates a record. But from the user perspective, they don't actually know what's going on at the database level. But you could assert that something was created, right? But it might be more realistic at that level of abstraction to be asserting some kind of visual element that had happened as a result of the flow that you're testing.

JOËL: Yeah. I would, in fact, go so far as to say that asserting on the state of your database in a system test is an anti-pattern. System tests are sort of, by design, meant to be all about user behavior, trying to mimic the experience of a user. And a user of a website is not going to be able to...you hope they're not able to SSH into [chuckles] your database and check the records that have been created. If they can, you've got another problem.

STEPHANIE: I wonder if you could take this idea to the extreme, though. And do you think there is a world where you don't really test database-level concerns at all, if you kind of believe this idea that it doesn't really matter what the state of it should be?

JOËL: I guess there's a few different things on, like, how much the state of it matters because you are asserting on its state sort of indirectly in a higher-level integration test. You're asserting that you see certain things show up on the screen in a system test. And maybe you want to say, "I do certain tasks, and then I expect to see three items in an unordered list." Those three items probably come from the database, although, you know, you could have it where they come from an API or something like that. So, the database is an implementation detail. But if you had random data in your database, you might, in some tests, have four items in the list, some tests have five. And that's just going to be a flaky test, and that's going to be incredibly painful. So, while you're not asserting on the database, having control over it during test setup, I think, does impact the way you assert.
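To make that anti-pattern concrete, here's a hypothetical system spec (the path, button label, and CartItem model are made up for illustration); the assertion stays at the level the user experiences:

```ruby
RSpec.describe "Cart", type: :system do
  it "shows the item the user just added" do
    visit products_path
    click_button "Add to cart"

    # Assert on what the user can actually see...
    expect(page).to have_content("1 item in your cart")

    # ...rather than reaching past the UI into the database:
    # expect(CartItem.count).to eq(1)
  end
end
```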
STEPHANIE: Yeah, that makes sense. I was suddenly just thinking about, like, how that exercise can actually tell you, perhaps, like, when it is important to, in your test setup, be persisting real records, as opposed to how much you can get away with, like, not interacting with it because, like, you aren't testing at that integration level.

JOËL: That brings up a good point because, in a lot of tests, you might need models, but you might not need persisted models to interact with them, if you're testing a method on a model that just does things based off its internal state and not any of the ActiveRecord database queries, or if you have some other service or something that consumes a model that doesn't necessarily need to query. There's a classic blog post on the thoughtbot blog about when not to use FactoryBot. And, you know, we are the makers of FactoryBot. It helps set up records in your database for testing. And people love to use it all the time. And we wrote an article about why, in many cases, you don't need to create something in the database. All you need is just something in memory, and that's going to be much faster than using FactoryBot because talking to the database is expensive.

STEPHANIE: Yeah, and I think we can see that in the shift from even, like, fixtures to factories, where test data was only persisted as needed in individual tests, rather than seeding it and having all of those records for your entire test run. And it's cool to see that continuing, you know, that idea further of, like, okay, now we have this new, popular tool that reduces some of that. But also, in most cases, we still don't need it...it's still too much.

JOËL: And from a performance perspective, it's a bit of a see-saw in that fixtures are a lot faster because they get inserted once at the beginning of your test run. So, it's one SQL execution at the beginning of a test run, and then every test after that is just doing its thing: maybe creating a record inside of a transaction, maybe not creating any records at all. And so, it can be a lot faster as opposed to using FactoryBot, where you're creating records one at a time. Every create call in a test is a round trip to the database, and those are expensive. So, FactoryBot tests tend to be more expensive than those that rely on fixtures. But you have the advantage of more control over what data is present and sort of more locality because you can see what has been created at the test level. But then, if you decide, hey, this is a test where I can just create records in memory, that's probably the best of all worlds, in that you don't need anything created ahead with fixtures. You also don't need anything to be inserted using FactoryBot because you don't even need the database for this test.

STEPHANIE: I'm curious, is that the assumption that you start with, that you don't need a persisted object when you're writing a basic unit test?

JOËL: I think I will, as much as possible, try not to persist, and only use persisted records if necessary. There are strategies with FactoryBot that will allow you to also, like, build_stubbed or just build in memory. So, there's a few different variations that will, like, partially do things for you. But oftentimes, you can just new up an object, and that's what I will often start with.
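For reference, the strategies Joël alludes to, roughly from cheapest to most expensive (assuming a hypothetical User model and :user factory):

```ruby
user = User.new(name: "Test")           # plain Ruby object; no FactoryBot, no database
user = FactoryBot.build(:user)          # unsaved instance (associations may still be saved)
user = FactoryBot.build_stubbed(:user)  # looks persisted, with a fake id, but never touches the database
user = FactoryBot.create(:user)         # a real INSERT: one database round trip per call
```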
JOËL: In many cases, I will already know what I'm trying to do. And so, I might not go through the steps of, oh, new up an object; oh no, I'm getting a "can't do the thing I need to do"; now, I need to write to the database. So, if I'm testing, let's say, an ActiveRecord scope that's filtering down a series of records, I know that's a wrapper around a database query. I'm not going to start by newing up some records and then sort of accidentally discovering, oh yeah, it does write to the database, because that was pretty clear to me from the beginning.

STEPHANIE: Yeah. Like, you have your mental shortcuts that you do. I guess I asked that question because I wonder if that is a good heuristic to share with, maybe, developers who are trying to figure out, like, should they create persisted records or, you know, use just a regular instance in memory, or, I don't know, even [laughs] use, like, a double [laughs]?

JOËL: Yeah, I've done that quite a bit as well. I would say maybe my heuristic is, is the method under test going to need to talk to the database? And, you know, I may or may not know that upfront because if I'm test-driving, I'm writing the test first. So, sometimes, maybe I don't know, and I'll start with something in memory and then realize, oh, you know, I do need to talk to the database for this. And this is for unit tests, in particular. For something more like an integration test or a system test that might require data in the database, well, system tests almost always do. You're not interacting with instances in memory when you're writing a system test, right? You're saying, "Given the database state is this, when I visit this URL and do these things, this page reacts in such and such a way." So, system tests always write to the database to start with. So, maybe that's my heuristic there. But for unit tests, maybe think a little bit about, does your method actually need to talk to the database? And maybe even almost give yourself a challenge. Can I get away with not talking to the database here?

STEPHANIE: Yeah, I like that because I've certainly seen a lot of unit tests that are integration tests in disguise [laughs].

JOËL: Isn't that the truth? So, we kind of opened up this conversation with the idea that there are different ways to manage your database in terms of, do you clean or not clean before a test run? Where did you end up on this particular project?

STEPHANIE: So, I ended up with a currently open PR to remove the need to truncate the database on each run of the test suite and just stick with the transaction-for-each-example strategy. And I do think that this will work for us as long as we decide we don't want to introduce something like fixtures, even though that is actually also a discussion that's still in the works. But I'm hoping, with this change, like, right now, I can help people start running faster tests [chuckles]. And should we ever introduce fixtures down the line, then we can revisit that. But it's one of those things that, I think, we've been living with for too long [laughs]. And no one ever questioned, like, "Oh, why are we doing this?" Or, you know, maybe that was a need, however many years ago, that just got overlooked. And as a person new to the project, I saw it, and now I'm doing something about it [laughs].

JOËL: I love that new-person energy on a project, and like, "Hey, we've got this config thing. Did you know that we didn't need this as of Rails 6?" And they're like, "Oh, I didn't even realize that." And then you add that, and it just moves you into the future a little bit. So, if I understand the proposed change, then you're removing the truncation strategy, but you're still going to be in a situation where you have a clean database before each test because you're wrapping tests in transactions, which I think is the default Rails behavior.
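In rspec-rails terms, the end state Stephanie describes can be as small as this (a sketch, assuming no multi-process setup still depends on truncation):

```ruby
# spec/rails_helper.rb
RSpec.configure do |config|
  # Wrap each example in a database transaction that is rolled back when
  # the example finishes, so every example starts from a clean slate.
  config.use_transactional_fixtures = true
end
```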
STEPHANIE: Yeah, that's where we're at right now. So, yeah, I'm not sure, like, how things came to be this way, but it seemed obvious to me that we were kind of doing this whole extra step that wasn't really necessary, at least at this point in time. Because, at least to my knowledge [laughs], there's no data being seeded in any other place.

JOËL: It's interesting, right? When you have a situation where this was sort of a very popular practice for a long time, a lot of guides mentioned it. And so, even though Rails has made changes that mean that this is no longer necessary, there's still a long tail of apps that will still have this: apps that may have been upgraded later and didn't drop this, or maybe even new apps that got created but didn't quite realize that the guide they were following was outdated, or that a best practice that was in their head was also outdated. And so, you have a lot of apps that will still have these sort of, like, relics of the past. And you're like, "Oh yeah, that's how we used to do things."

STEPHANIE: So yeah, thanks, Joël, for going on this journey with me in terms of, you know, reassessing my assumptions about test databases. I'm wondering, like, if this is common, how other people, you know, approach what they expect from the test database, whether it be totally clean or have, you know, any required data for common flows and use cases of your system. But it does seem that little in-between of, like, maybe it is using transactions to reset for each example, but then there's also some persistence that's happening somewhere else, could be a little tricky to manage.

JOËL: On that note, shall we wrap up?

STEPHANIE: Let's wrap up. Show notes for this episode can be found at bikeshed.fm.

JOËL: This show has been produced and edited by Mandy Moore.

STEPHANIE: If you enjoyed listening, one really easy way to support the show is to leave us a quick rating or even a review in iTunes. It really helps other folks find the show.

JOËL: If you have any feedback for this or any of our other episodes, you can reach us @_bikeshed, or you can reach me @joelquen on Twitter.

STEPHANIE: Or reach both of us at [email protected] via email.

JOËL: Thanks so much for listening to The Bike Shed, and we'll see you next week.

ALL: Byeeeeeeeeeeee!!!!!!

AD: Did you know thoughtbot has a referral program? If you introduce us to someone looking for a design or development partner, we will compensate you if they decide to work with us. More info on our website at: tbot.io/referral. Or you can email us at: [email protected] with any questions.