The PM was yelling in Slack. Devs were gaslighting me like it was an Olympic sport. The “bug” was user error. Again.
And in the eye of this digital hurricane? Some fresh-faced newbie asks:
“Hey, what kind of questions do they ask in a manual testing interview?”
Kid, let me tell you the real question:
Why are we still asking these questions in 2025?
But fine. You want answers? Let me give you the cheat codes before you end up like me: half-caffeinated, screaming at TestRail, wondering if life was better back in the HP ALM days (it wasn’t).
Manual testing is not dead.
It’s just been renamed 14 times and now has a subscription fee.
You think AI’s gonna replace manual testers? Cute. Until AI starts reading between the lines of a half-baked product spec written during someone’s third Red Bull bender, we’re safe.
You’re probably roasting certification-style Q&As by now. That’s the perfect moment to link to ISTQB:
“If you’re still memorizing ISTQB glossary terms like it’s 2012, here’s their official syllabus — good luck surviving an actual bug triage.”
Now, let’s talk shop.
Let me guess—you’ve been memorizing definitions.
Bad idea.
Just say:
“Severity is about how broken it is. Priority is about how soon anyone will care enough to fix it.”
Then drop this gem:
“A typo on the homepage is low severity, but high priority. A crash in an obscure admin screen? High severity, low priority—until a VP touches it.”
Bonus Points: Reference that time the logo was misspelled and marketing had a meltdown.
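If the two axes still blur together, here’s a minimal sketch of the idea. Everything in it is illustrative (no real tracker has this API): severity describes the damage, priority describes the urgency, and they move independently.

```python
# Toy triage: severity and priority are independent axes.
# Names and mappings here are made up for illustration.

SEVERITY = {"crash": "high", "typo": "low"}  # how broken it is

def triage(bug_kind, visible_to_customers):
    """Priority follows visibility and business pain, not severity."""
    severity = SEVERITY.get(bug_kind, "medium")
    priority = "high" if visible_to_customers else "low"
    return severity, priority

# Homepage typo: low severity, high priority.
# Crash in an obscure admin screen: high severity, low priority.
```

Until, of course, a VP hits the admin screen and the priority column quietly rewrites itself.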
If your answer involves any of the following:
Step 1: Click Login
Step 2: Enter username
…then congratulations, you’re writing recipes, not test cases.
A good test case should:
Assume the dev will mess up
Assume the user will definitely mess up
Survive a requirements pivot
You need intent, not instructions. Like:
“Verify that the login fails gracefully when credentials are correct but account status is ‘deactivated’.”
Otherwise, your test case is as useful as a bicycle in a DevOps meeting.
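What does “intent, not instructions” look like in practice? A rough pytest-style sketch, where `login` and `AuthResult` are hypothetical stand-ins for your app’s auth layer, not a real library:

```python
# Hedged sketch: an intent-driven test case, pytest style.
# `login`, `AuthResult`, and USERS are illustrative fakes.

class AuthResult:
    def __init__(self, ok, message):
        self.ok = ok
        self.message = message

USERS = {"ada": {"password": "s3cret", "status": "deactivated"}}

def login(username, password):
    user = USERS.get(username)
    if not user or user["password"] != password:
        return AuthResult(False, "Invalid credentials")
    if user["status"] == "deactivated":
        # Fail gracefully: no crash, no stack trace, a clear reason.
        return AuthResult(False, "Account deactivated. Contact support.")
    return AuthResult(True, "Welcome")

def test_login_fails_gracefully_for_deactivated_account():
    result = login("ada", "s3cret")   # credentials are correct...
    assert not result.ok              # ...but access is still denied
    assert "deactivated" in result.message.lower()  # and the user learns why
```

Notice there’s no “Step 1: Click Login” in sight. The test states what must be true, so it survives a UI redesign that would shred a step-by-step recipe.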
laughs in real life experience
Say:
“First, I ask dumb questions until I get smart answers. Then I test what I think they meant. And I always document what I assumed—because guess who they blame when it breaks?”
They’re the same thing.
But also different.
And yes, we’re still pretending that matters.
Answer like a tired Jedi:
“Smoke: Did the build survive deployment? Sanity: Did the main functions survive the last change? Same cigarette, different puff.”
Yes, but why?
Nobody talks like this outside interviews.
Say:
“Sure. It’s the Software Testing Life Cycle: requirements, planning, design, execution, defect reporting, closure. I follow it in spirit—not ceremony. Agile ruined our attention spans.”
Sites like Ministry of Testing give decent interview prep, but they don’t warn you about the dev who merges broken code on a Friday.
Easy. You send them a Loom video of the bug. With dramatic background music. Then you put it in the demo for the client.
But in the interview? Say this:
“I double-check the repro steps, gather evidence, and try to align on the user impact. If needed, I escalate diplomatically—unless it’s release day, then I escalate aggressively.”
You definitely know this.
Just don’t say “retesting the app after changes.”
That’s like saying a fire drill is “retesting the building.”
Say:
“Regression ensures the code you touched didn’t break 3 other modules you forgot existed. It’s the QA equivalent of checking if your ex is still stalking your Instagram after you moved on.”
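If you want to sound like you’ve actually scoped a regression pass, here’s the mental model as a toy sketch. The module names and dependency map are invented; the point is that a change ripples outward through everything that depends on it.

```python
# Hedged sketch: picking regression scope from a toy dependency map.
# Module names are illustrative, not from any real codebase.

DEPENDS_ON = {
    "checkout": {"cart", "payments"},
    "invoices": {"payments"},
    "profile":  {"auth"},
}

def regression_scope(changed_module):
    """Everything that depends on the changed module needs retesting too."""
    return {mod for mod, deps in DEPENDS_ON.items() if changed_module in deps}

# Touch `payments`, and suddenly checkout AND invoices need a pass,
# even though nobody "changed" them.
```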
Welcome to the triage matrix, where dreams go to die.
Say:
“I prioritize based on risk, business impact, and likelihood of failure. Login > Forgot Password > Export to CSV. Nobody cares about Export to CSV.”
But if the client is an accountant?
Flip it.
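That flip is the whole trick: risk-based prioritization is just a weighted sort, and the weights belong to the client, not to you. A minimal sketch with made-up scores:

```python
# Hedged sketch: risk-based test ordering. Weights are invented;
# the point is that priority depends on who the client is.

def rank(features, weights):
    """Order features by risk weight (impact x likelihood), highest first."""
    return sorted(features, key=lambda f: weights.get(f, 0), reverse=True)

saas_weights       = {"login": 9, "forgot_password": 6, "export_csv": 2}
accountant_weights = {"export_csv": 9, "login": 7, "forgot_password": 3}

features = ["export_csv", "login", "forgot_password"]
# For a SaaS audience, login tops the list.
# For an accountant, export_csv suddenly matters most.
```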
Your last line of defense. Your secret weapon. Your “oh crap, we missed this edge case” fix.
Say:
“Exploratory testing is structured chaos. I follow a charter, observe system behavior, and take notes like a crime scene investigator. It’s where the bugs live.”
Bonus: mention Session-Based Testing. Sounds fancy. Isn’t.
There never is.
Say:
“Then I test smart. I hit critical paths, recent changes, and areas with bad karma—i.e., the modules that break every sprint. I inform stakeholders about the risks I couldn’t cover.”
And always add:
“You can ship fast. Or safe. Pick one.”
“Manual testers don’t need technical skills.”
Cool, then why am I writing SQL queries at midnight?
“You can automate everything.”
Sure, once the product stops changing. Which is never.
“Exploratory testing isn’t real testing.”
Tell that to the guy who shipped a broken logout button because automation didn’t click the ‘X’ fast enough.
When deadlines are tight, UI is unstable, and you’re testing workflows with 4 popups, 2 OTPs, and one unpredictable third-party service?
Manual testing is king.
You can’t automate what hasn’t even been designed yet. Or worse—what keeps changing every sprint like it’s on caffeine and feelings.
Jira + Xray. TestRail. Zephyr. ALM (RIP).
They all promise structure. You end up with checklists nobody updates, dashboards nobody trusts, and reports for managers who never read them.
Just give me Notion and some brain cells.
Why are we still hiring “manual testers” like it’s 2012?
Why not just call them Product Explorers or Bug Detectives and give them the respect (and salary) they deserve?
Or do we wait until ChatGPT-12 replaces us all with “AI-powered QA synergies” and our jobs are renamed to “Prompt-Engineer-Level-Intern”?
Your move, internet.