Tests & Documentation: The Librarian & The Auditor
Writing tests and documentation is like "eating your vegetables." Everyone knows they should do it, but few enjoy it. Good news: AI loves vegetables.
The Auditor: Generating Tests
Writing test cases is repetitive. AI excels at repetition.
1. Generating Unit Tests (Pytest)
Scenario: You have a function `calculate_discount(price, is_member)`.
Prompt:
"Write
pytestunit tests for this Python function. Do NOT use classes or complex fixtures. Just use simpledef test_...():functions. Include cases for:
- Normal price, member.
- Normal price, non-member.
- Zero price."
2. The "Simple Check" (Manual Assertions) ๐ข โ
If pytest feels too complex, ask AI for simple assert statements. This is great for beginners.
Prompt:
"I wrote a function
add(a, b). Write 3 simpleassertstatements I can put at the bottom of my file to test it when I runpython script.py."
Result:
```python
if __name__ == "__main__":
    assert add(2, 2) == 4, "2+2 should be 4"
    assert add(0, 5) == 5, "0+5 should be 5"
    print("All tests passed!")
```

3. The "Pure Function" Trick (Logic vs Input)
Beginners often mix `input()` calls into their functions, which makes them hard to test.
Bad Code:
```python
def get_name():
    name = input("Enter name: ")  # Hard to test!
    return name.upper()
```

Prompt:
"Refactor this function so I can test it without running
input(). Separate the logic from the user input."
Result:
```python
# Easy to test!
def format_name(name):
    return name.upper()

# Handle input separately
user_input = input("Enter name: ")
print(format_name(user_input))
```

4. Interpreting Test Failures
Pytest output can be scary. Ask AI to read it.
Prompt:
"Explain this pytest failure error. What did I expect and what did I get?
E       assert 10 == 20
E        +  where 10 = add(5, 5)"
Result: "The test expected the result to be 20, but your add(5, 5) function returned 10. Check your test logic."
5. TDD (Test Driven Development) Workflow
In TDD, you write tests before the code. AI can help you set this up.
Prompt:
"I want to write a function
is_palindrome(text)that returns True if the text reads the same forwards and backwards. First, write a set of failing tests for this function covering punctuation and case sensitivity."
Result: AI writes tests that fail (RED). Then you write the code to pass them (GREEN).
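A sketch of what both phases might look like. The tests encode the requirements; the implementation shown is one way to turn RED into GREEN (the "strip punctuation, ignore case" cleaning rule is an assumption):

```python
import re

def is_palindrome(text):
    # GREEN-phase implementation, written AFTER the tests below existed.
    # Assumed rule: ignore case and anything that isn't a letter or digit.
    cleaned = re.sub(r"[^a-z0-9]", "", text.lower())
    return cleaned == cleaned[::-1]

# The RED-phase tests, written first (they fail until the function works).
def test_simple_palindrome():
    assert is_palindrome("racecar") is True

def test_case_insensitive():
    assert is_palindrome("Racecar") is True

def test_ignores_punctuation():
    assert is_palindrome("A man, a plan, a canal: Panama!") is True

def test_not_a_palindrome():
    assert is_palindrome("python") is False
```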
6. Generating Test Data (Faker)
Need a CSV full of fake users? Don't type them by hand.
Prompt:
"Generate a Python script to create a CSV file named
users.csvwith 100 rows. Columns: id, name, email, signup_date. Use thefakerlibrary."
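The `faker` library gives you realistic names and emails. If you only need structurally valid rows (or don't have `faker` installed), a stdlib-only sketch like this works too — the column names come from the prompt, everything else is placeholder data:

```python
import csv
import random
from datetime import date, timedelta

def write_fake_users(path, rows=100):
    """Write a CSV of placeholder user rows (stdlib only, no faker)."""
    start = date(2023, 1, 1)  # Arbitrary base date for signup_date.
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "name", "email", "signup_date"])
        for i in range(1, rows + 1):
            name = f"user{i}"
            signup = start + timedelta(days=random.randint(0, 364))
            writer.writerow([i, name, f"{name}@example.com", signup.isoformat()])

write_fake_users("users.csv")
```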
7. The "Edge Case Hunter" (The Pessimist) ๐ฉ๏ธ โ
You often test the "Happy Path" (valid input). AI is great at thinking of the "Unhappy Path".
Prompt:
"I have a function
divide(a, b). What are 5 specific inputs that might break this or cause weird errors? Write a test case for each."
Result: AI suggests `divide(10, 0)`, `divide("ten", 2)`, `divide(None, 5)`.
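Assuming the simplest possible `divide` (a plain `a / b`), the unhappy-path tests might look like this. Plain `try`/`except` is used so the file runs anywhere; with pytest installed, `with pytest.raises(ZeroDivisionError): ...` is the more compact idiom:

```python
# Hypothetical implementation under test.
def divide(a, b):
    return a / b

def test_divide_by_zero():
    try:
        divide(10, 0)
        assert False, "Expected ZeroDivisionError"
    except ZeroDivisionError:
        pass  # Expected: dividing by zero must fail loudly.

def test_string_numerator():
    try:
        divide("ten", 2)
        assert False, "Expected TypeError"
    except TypeError:
        pass  # Expected: strings don't support /.

def test_none_numerator():
    try:
        divide(None, 5)
        assert False, "Expected TypeError"
    except TypeError:
        pass  # Expected: None doesn't support /.
```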
The Librarian: Writing Documentation
Code without docs is a "mystery box."
1. Docstrings (Google Style)
Prompt:
"Add Google-style docstrings to this function. Include type hints, args, returns, and raises."
Result:
```python
def connect_db(url: str) -> bool:
    """Connects to the database.

    Args:
        url (str): The connection string.

    Returns:
        bool: True if the connection succeeded.

    Raises:
        ConnectionError: If the database is unreachable.
    """
    ...
```

2. The Translator (Improving Error Messages)
Generic errors like "Invalid Input" are frustrating.
Prompt:
"Rewrite these error messages to be more helpful to the user. Explain why it failed."
Original: `raise ValueError("Error")`
AI Suggestion: `raise ValueError("Age cannot be negative. You provided: -5")`
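A minimal sketch of the pattern in context — the validator name and the rule are made up for illustration:

```python
# Hypothetical validator: the error says WHAT rule failed and WHICH value broke it.
def set_age(age):
    if age < 0:
        raise ValueError(f"Age cannot be negative. You provided: {age}")
    return age
```

An f-string puts the offending value directly into the message, so the user (and you, debugging later) never has to guess.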
3. The README
A good README sells your project.
Prompt:
"Write a
README.mdfor a Python web scraper project called 'SuperScraper'. Sections:
- Title & Description (Scrapes book prices).
- Installation:
pip install -r requirements.txt.- Usage example:
python main.py --url ....- License: MIT."
4. The "Commentator" (Explaining Logic) ๐ฌ โ
Docstrings explain what a function does. Comments explain how.
Prompt:
"Add comments to this complex logic explaining what each line does. Don't state the obvious (like 'increment i'), explain the intent."
Result:
```python
# We use a set here to remove duplicate email addresses instantly
unique_emails = set(email_list)

# Sort them alphabetically so the output is consistent for the user
sorted_emails = sorted(unique_emails)
```

5. The "Doctest" (Tests inside Docs)
Python has a built-in feature (`doctest`) that lets you write tests inside your documentation.
Prompt:
"Add Python
docteststo this function docstring. Show an example of a successful run and a failed run."
Result:
```python
def add(a, b):
    """
    >>> add(2, 3)
    5
    >>> add(-1, 1)
    0
    """
    return a + b
```

Run them with `python -m doctest your_file.py`.

Task: The "Vegetable" Buffet
Task
- Take a function you wrote previously.
- Ask AI to "Write comprehensive tests for this using pytest".
- Ask AI to "Write a docstring for this".
- Verify: Do the tests actually pass? Does the docstring match what the code does?
Bonus: Generating Tutorials
You can ask AI to teach you your own code.
Prompt:
"I pasted my script below. Write a step-by-step tutorial for a complete beginner explaining how this code works. Use analogies."
Why?: This is the best way to solidify your understanding. If the AI explains it and you think "Wait, that's not what I meant," you found a bug!
Warning: The "Blind Spot" ๐ โ
AI assumes your code works. If your function has a bug, the AI might write a test that expects the bug! Rule: Always review the test logic. Ideally, write the test before the code (TDD), or at least understand what "Success" looks like.
Related Topics
- Defining Requirements: Know what to test before you write the test (User Stories).
- Python Functions: How to write the docstrings you are generating.