Tests & Documentation: The Librarian & The Auditor 📚🕵️‍♀️

Writing tests and documentation is like "eating your vegetables." Everyone knows they should do it, but few enjoy it. Good news: AI loves vegetables.

The Auditor: Generating Tests 🧪

Writing test cases is repetitive. AI excels at repetition.

1. Generating Unit Tests (Pytest)

Scenario: You have a function calculate_discount(price, is_member).

Prompt:

"Write pytest unit tests for this Python function. Do NOT use classes or complex fixtures. Just use simple def test_...(): functions. Include cases for:

  1. Normal price, member.
  2. Normal price, non-member.
  3. Zero price."
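
A possible result (a sketch: it assumes calculate_discount applies a 10% member discount and lives in a file called shop.py, so adjust the import and the expected values to your actual code):

python
from shop import calculate_discount  # hypothetical module name

def test_normal_price_member():
    # Assumes members get 10% off -- verify against your real rule.
    assert calculate_discount(100, is_member=True) == 90

def test_normal_price_non_member():
    assert calculate_discount(100, is_member=False) == 100

def test_zero_price():
    assert calculate_discount(0, is_member=True) == 0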

2. The "Simple Check" (Manual Assertions) 🟢

If pytest feels too complex, ask AI for simple assert statements. This is great for beginners.

Prompt:

"I wrote a function add(a, b). Write 3 simple assert statements I can put at the bottom of my file to test it when I run python script.py."

Result:

python
if __name__ == "__main__":
    assert add(2, 2) == 4, "2+2 should be 4"
    assert add(0, 5) == 5, "0+5 should be 5"
    assert add(-1, 1) == 0, "-1+1 should be 0"
    print("All tests passed!")
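
If any assert fails, Python stops with an AssertionError showing your message; if they all pass, you see the success line. No installs, no test runner.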

3. The "Pure Function" Trick (Logic vs Input) 🧠

Beginners often call input() inside their functions, which makes them hard to test.

Bad Code:

python
def get_name():
    name = input("Enter name: ") # Hard to test!
    return name.upper()

Prompt:

"Refactor this function so I can test it without running input(). Separate the logic from the user input."

Result:

python
# Easy to test!
def format_name(name):
    return name.upper()

# Handle input separately
user_input = input("Enter name: ")
print(format_name(user_input))
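
Now the logic half is trivially testable. A one-line sanity check, no user interaction required:

python
assert format_name("ada") == "ADA"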

4. Interpreting Test Failures 🔴

Pytest output can be scary. Ask AI to read it.

Prompt:

"Explain this pytest failure error. What did I expect and what did I get? E assert 10 == 20E + where 10 = add(5, 5)"

Result: "The test expected the result to be 20, but your add(5, 5) function returned 10. Check your test logic."

5. TDD (Test-Driven Development) Workflow

In TDD, you write tests before the code. AI can help you set this up.

Prompt:

"I want to write a function is_palindrome(text) that returns True if the text reads the same forwards and backwards. First, write a set of failing tests for this function covering punctuation and case sensitivity."

Result: AI writes tests that fail (RED). Then you write the code to pass them (GREEN).
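
The RED stage might look something like this (a sketch; the module name my_module and the exact cases are illustrative):

python
from my_module import is_palindrome  # fails at first: the function doesn't exist yet

def test_simple_palindrome():
    assert is_palindrome("racecar") is True

def test_ignores_case():
    assert is_palindrome("RaceCar") is True

def test_ignores_punctuation():
    assert is_palindrome("A man, a plan, a canal: Panama!") is True

def test_non_palindrome():
    assert is_palindrome("python") is False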

6. Generating Test Data (Faker)

Need a CSV full of fake users? Don't type them out by hand.

Prompt:

"Generate a Python script to create a CSV file named users.csv with 100 rows. Columns: id, name, email, signup_date. Use the faker library."

7. The "Edge Case Hunter" (The Pessimist) 🌩️

You often test the "Happy Path" (valid input). AI is great at thinking of the "Unhappy Path".

Prompt:

"I have a function divide(a, b). What are 5 specific inputs that might break this or cause weird errors? Write a test case for each."

Result: AI suggests divide(10, 0), divide("ten", 2), divide(None, 5).
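
Turned into tests, those suggestions might look like this (a sketch; it assumes divide does plain division, so decide for yourself whether each error is the behavior you actually want):

python
import pytest

from my_module import divide  # hypothetical module name

def test_divide_by_zero():
    with pytest.raises(ZeroDivisionError):
        divide(10, 0)

def test_string_input():
    with pytest.raises(TypeError):
        divide("ten", 2)

def test_none_input():
    with pytest.raises(TypeError):
        divide(None, 5)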

The Librarian: Writing Documentation 📖

Code without docs is a "mystery box."

1. Docstrings (Google Style)

Prompt:

"Add Google-style docstrings to this function. Include type hints, args, returns, and raises."

Result:

python
def connect_db(url: str) -> bool:
    """Connects to the database.

    Args:
        url (str): The connection string.

    Returns:
        bool: True if the connection succeeded.

    Raises:
        ConnectionError: If the database is unreachable.
    """
    ...

2. The Translator (Improving Error Messages) 🗣️

Generic errors like "Invalid Input" are frustrating.

Prompt:

"Rewrite these error messages to be more helpful to the user. Explain why it failed."

Original: raise ValueError("Error")
AI Suggestion: raise ValueError("Age cannot be negative. You provided: -5")
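
In context, the improved version might look like this (a sketch with a hypothetical validate_age function):

python
def validate_age(age):
    if age < 0:
        # Say what failed, why, and what the user actually provided.
        raise ValueError(f"Age cannot be negative. You provided: {age}")
    return age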

3. The README

A good README sells your project.

Prompt:

"Write a README.md for a Python web scraper project called 'SuperScraper'. Sections:

  • Title & Description (Scrapes book prices).
  • Installation: pip install -r requirements.txt.
  • Usage example: python main.py --url ....
  • License: MIT."

4. The "Commentator" (Explaining Logic) 💬

Docstrings explain what a function does. Good comments explain why it does it that way.

Prompt:

"Add comments to this complex logic explaining what each line does. Don't state the obvious (like 'increment i'), explain the intent."

Result:

python
# We use a set here to remove duplicate email addresses instantly
unique_emails = set(email_list)

# Sort them alphabetically so the output is consistent for the user
sorted_emails = sorted(unique_emails)

5. The "Doctest" (Tests inside Docs) 📄

Python has a neat built-in feature, doctest, where you write tests directly inside your docstrings.

Prompt:

"Add Python doctests to this function docstring. Show an example of a successful run and a failed run."

Result:

python
def add(a, b):
    """
    >>> add(2, 3)
    5
    >>> add(-1, 1)
    0
    >>> add(2, "3")
    Traceback (most recent call last):
        ...
    TypeError: unsupported operand type(s) for +: 'int' and 'str'
    """
    return a + b
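
Run them with python -m doctest script.py -v, or let pytest collect them via pytest --doctest-modules. If every example's output matches, the tests pass.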

Task: The "Vegetable" Buffet 🥗


  1. Take a function you wrote previously.
  2. Ask AI to "Write comprehensive tests for this using pytest".
  3. Ask AI to "Write a docstring for this".
  4. Verify: Do the tests actually pass? Does the docstring match what the code does?

Bonus: Generating Tutorials 🎓

You can ask AI to teach you your own code.

Prompt:

"I pasted my script below. Write a step-by-step tutorial for a complete beginner explaining how this code works. Use analogies."

Why? This is one of the best ways to solidify your understanding. If the AI explains it and you think "Wait, that's not what I meant," you found a bug!

Warning: The "Blind Spot" 🙈

AI assumes your code works. If your function has a bug, the AI might write a test that expects the bug! Rule: Always review the test logic. Ideally, write the test before the code (TDD), or at least understand what "Success" looks like.
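
For example, if your add function accidentally multiplies, an AI reading that code may "learn" the wrong behavior and bake it into the test (a sketch):

python
def add(a, b):
    return a * b  # Bug: multiplies instead of adding

# A test generated from the buggy code might expect the buggy output:
assert add(2, 3) == 6  # Passes -- and enshrines the bug!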