Recruiting Without Assignments

How I conduct technical interviews to assess candidates

A coworker once joked: “They made me invert a binary tree during the interview, but all I do day-to-day is write color codes in CSS to give a button five different shades of gray.”
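For anyone who has missed the meme: inverting a binary tree just means mirroring it by swapping every node’s children. Here is a minimal TypeScript sketch of both halves of the joke; the node shape and hex values are made up for illustration:

```typescript
// The interview task: invert (mirror) a binary tree.
interface TreeNode {
  value: number;
  left: TreeNode | null;
  right: TreeNode | null;
}

// Recursively swap every node's children.
function invert(node: TreeNode | null): TreeNode | null {
  if (node === null) return null;
  const swapped = invert(node.left);
  node.left = invert(node.right);
  node.right = swapped;
  return node;
}

// The day job: five different shades of gray for a button.
const buttonGrays: string[] = ["#f5f5f5", "#d9d9d9", "#bfbfbf", "#8c8c8c", "#595959"];
```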

Too often, assignments feel like an initiation ceremony for a select group rather than a real measure of fit. That’s why, as a technical interviewer, I’ve developed an approach that helps me understand not just what candidates know, but how they’ll function within a team. Let me explain how I conduct these conversations, and why.

Finding Common Ground

I start by identifying shared experiences with the candidate. For example, we might both have experience with Terraform or cloud environments. As candidates introduce themselves, I listen for overlaps in past projects or challenges. These aren’t just conversation starters; they’re calibration points that help me assess technical depth from a place of mutual understanding.

Layer-by-Layer Assessment

Once I establish this baseline, I dig deeper. I move the conversation from theoretical knowledge to practical experience, then to real-world challenges. The goal is to understand the boundary of a candidate’s experience and observe how they respond when they reach it.

Observing Behavioral Patterns

Beyond technical answers, I’m evaluating how candidates handle uncertainty and interact in a team setting. Key observations include:

  • Handling gaps in knowledge: Do they admit what they don’t know or try to evade questions?
  • Communication style: Do they ask clarifying questions and engage in dialogue, or jump to conclusions?
  • Depth of experience: Can they discuss real problems they’ve solved, including mistakes?
  • Engagement and grit: Do they persist in understanding complex topics?

Patterns matter more than single behaviors. Anxiety may produce lengthy responses, but repeated evasion whenever a candidate is pushed beyond their knowledge often signals bullshitting, and that is a concern for collaboration and accountability. I note these patterns during the conversation and check that they are consistent across the interview.

Why This Matters

Traditional technical assessments or coding tests show only a slice of a candidate’s abilities. They rarely reveal how someone thinks, communicates, or fits within a team. Conversational interviews provide insight into problem-solving, communication style, and cultural fit.

For example, a candidate with strong cloud expertise may excel technically but struggle to collaborate if their communication style inhibits honest feedback or accountability. Identifying such patterns allows for better long-term hiring decisions, as technical skill alone is not enough for team success.

When candidates do not answer questions directly, the cause could be anxiety, communication style, or a knowledge gap. To tell these apart, I vary my question types and watch whether the behavior recurs. No single behavior is disqualifying; I’m looking for patterns that persist even after the candidate has warmed up.

Anxiety or ADHD can produce lengthy responses, but when the rambling appears only at the limits of a candidate’s knowledge, it raises a red flag. Everyone occasionally stretches their knowledge in interviews, and a little bluffing is common. What concerns me is when candidates persistently avoid acknowledging the limits of their knowledge, even after I have explicitly normalized not knowing everything and offered multiple opportunities to pivot. The ability to say “I don’t know” matters in itself: in my experience, people who bullshit in interviews often struggle with accountability once hired.

I’ve found that this layered, conversation-based approach gives me a much clearer picture of how a candidate will actually perform in the role. It’s harder to measure, yes, but it’s far more valuable for predicting success within a team, and it leaves me with a holistic view of the candidate that goes beyond technical skill.

Assignments

Assignments have their place, but I prefer not to use them. They often lead to over-preparation or attempts to game the system, neither of which reflects a candidate’s true capabilities. Real-world work requires adapting, structuring problems, and communicating effectively: skills not easily measured by a take-home test.

Life throws curveballs, and engineers need to reason from experience and communicate effectively under uncertainty. With AI tools like LLMs increasingly assisting with technical tasks, evaluating these human skills becomes even more critical; they are the true differentiators in modern engineering.

LLMs also make it easy to cheat on assignments. Should we drop unassisted assignments entirely and let candidates use these tools as they would in real life? I’m leaning towards that approach; what remains to assess is reasoning, communication, and culture fit. Can we measure those in a conversation? I believe we can.

Favoritism

Interviewers must be aware of potential bias, especially when they share experiences with a candidate: the common ground I use for calibration can easily slide into affinity. Multiple interviewers help ensure a balanced view. To reduce bias further, I write down my observations immediately after the interview and review them later, before sharing them with the recruiter; synchronizing with other interviewers too early can unintentionally influence opinions.