Patterns & Best Practices
This guide teaches you how to write clean, idiomatic, professional Quill code. It is aimed at intermediate developers who already know the language basics and want to level up their craft. Each section covers a theme, shows both good and bad examples, and ends with a realistic, complete example you can adapt for your own projects.
1. Code Organization
How you structure your files matters far more than most people think. Good structure makes code easy to find, easy to change, and easy to hand off to a teammate. Bad structure turns every change into a scavenger hunt.
Small projects (1-3 files)
If your entire program fits in a few hundred lines, keep it simple. A single file is fine. When it grows past 300 lines, split it into two or three files by responsibility.
my-tool/
main.quill -- entry point, top-level logic
helpers.quill -- utility functions
Medium projects (4-10 files)
Group files by what they do. Keep one clear entry point.
my-app/
main.quill -- entry point
config.quill -- configuration and constants
db.quill -- database connection and queries
auth.quill -- authentication logic
utils.quill -- shared helpers
tests/
auth.test.quill -- tests for auth
db.test.quill -- tests for database
Large projects (10+ files)
Use directories to group related files. A web application might look like this:
my-web-app/
main.quill
config.quill
routes/
users.quill -- /api/users endpoints
posts.quill -- /api/posts endpoints
auth.quill -- /auth/login, /auth/signup
models/
User.quill -- User class and validation
Post.quill -- Post class and validation
utils/
hash.quill -- password hashing helpers
email.quill -- email sending helpers
validate.quill -- input validation helpers
tests/
routes/
users.test.quill
posts.test.quill
models/
User.test.quill
When to split into a new file
Split when a file does more than one distinct job. A good rule: if you can give the new file a clear, specific name (not "misc" or "stuff"), it deserves its own file.
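For example, once email helpers creep into your entry point, "email" is a clear, specific name, so those functions earn their own file (a hypothetical layout; the function names are illustrative):

-- Before: main.quill contains startApp, handleRequest, formatEmail, sendEmail
-- After:
my-tool/
main.quill -- startApp, handleRequest
email.quill -- formatEmail, sendEmail

If the best name you can come up with is "misc.quill", the split is premature.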
Naming conventions
| What | Convention | Example |
|---|---|---|
| Variables | camelCase | userName, totalPrice, isActive |
| Functions | camelCase | calculateTotal, sendEmail, formatDate |
| Classes | PascalCase | User, HttpClient, ShoppingCart |
| Constants | UPPER_CASE | MAX_RETRIES, API_BASE_URL, DEFAULT_TIMEOUT |
| Files | kebab-case or camelCase | auth.quill, user-routes.quill |
-- Bad: inconsistent, unclear names
data is 100
to do_thing x:
give back x * data
-- Good: clear, consistent names
MAX_CONNECTIONS is 100
to scaleByLimit value:
give back value * MAX_CONNECTIONS
Prefer specific, descriptive names: userAge is better than num, and fetchOrders is better than getData.
Organizing imports
Group your imports at the top of every file, in this order: standard library, third-party packages, your own modules. Separate each group with a blank line.
-- Standard library (available by default, but be explicit for clarity)
from "crypto" use hash, randomBytes
-- Third-party packages
use "express" as app
from "pg" use Pool
-- Your own modules
from "./auth.quill" use login, register
from "./utils/validate.quill" use validateEmail
Keep functions short and focused
A function should do one thing. If you find yourself writing a function longer than 20-30 lines, look for opportunities to extract helper functions.
Bad: one giant function that does everything.
-- Bad: processUser does validation, transformation, saving, and emailing
to processUser input:
-- 50 lines of mixed validation, database calls, and email logic
...
Good: each step is its own function.
-- Good: each function has one clear job
to processUser input -> Result of User:
validated is validateUserInput(input)?
user is createUserRecord(validated)
saveToDatabase(user)?
sendWelcomeEmail(user.email)?
give back Success(user)
Use constants for magic numbers
-- Bad: what does 86400000 mean?
if now() - user.lastLogin is greater than 86400000:
forceReauth()
-- Good: self-documenting
ONE_DAY_MS is 86400000
if now() - user.lastLogin is greater than ONE_DAY_MS:
forceReauth()
2. Error Handling Patterns
Unhandled errors crash programs and frustrate users. Good error handling means anticipating what can go wrong and responding gracefully. Quill gives you two approaches: try/if it fails for exception-style handling, and Result types for explicit, composable error handling.
Always handle errors
Bad: ignoring errors entirely.
-- Bad: if the file doesn't exist, the program crashes
data is readFile("config.json")
config is parseJSON(data)
Good: wrapping risky operations and providing a fallback.
-- Good: handle the error, provide a fallback
try:
data is readFile("config.json")
config is parseJSON(data)
if it fails error:
say "Could not load config: {error}"
config is {port: 3000, debug: no}
When to use try/if it fails vs Result types
| Approach | Best for |
|---|---|
| try / if it fails | I/O operations (file reads, HTTP calls), top-level error boundaries, when you want to catch everything in a block |
| Result types | Business logic, function return values, when you need to chain multiple fallible operations, when errors are expected and meaningful |
Error propagation with Result types
Use Success() and Error() to make error handling explicit. Callers always know a function can fail.
to parseAge input as text -> Result of number:
try:
age is toNumber(input)
if age is less than 0:
give back Error("Age cannot be negative")
if age is greater than 150:
give back Error("Age is unrealistically high")
give back Success(age)
if it fails err:
give back Error("'{input}' is not a valid number")
The ? operator for chaining
When you call multiple functions that return Result, the ? operator keeps things clean. If the result is an Error, it returns immediately. If it is a Success, it unwraps the value.
Bad: verbose manual checking at every step.
-- Bad: repetitive error checking
to processOrder orderId:
orderResult is loadOrder(orderId)
match orderResult:
when Error msg:
give back Error(msg)
when Success order:
userResult is loadUser(order.userId)
match userResult:
when Error msg:
give back Error(msg)
when Success user:
give back Success(chargeUser(user, order.total))
Good: the ? operator does the same thing in three lines.
-- Good: ? propagates errors automatically
to processOrder orderId -> Result of Receipt:
order is loadOrder(orderId)?
user is loadUser(order.userId)?
give back Success(chargeUser(user, order.total))
Custom error messages
Always include enough context for someone to diagnose the problem without reading your source code.
-- Bad: useless error message
give back Error("failed")
-- Good: says what went wrong and why
give back Error("Failed to load user {id}: database connection timed out")
The try expression for fallbacks
When you just want a default value if something fails, use try ... otherwise as a one-liner.
-- Instead of a full try/if it fails block for a simple fallback:
name is try loadUser(42).name otherwise "Guest"
port is try toNumber(env("PORT")) otherwise 3000
Wrapping third-party errors
When calling external libraries, catch their errors and wrap them in your own error type with context. This makes debugging much easier.
to queryDatabase sql params -> Result of list:
try:
result is await db.query(sql, params)
give back Success(result.rows)
if it fails err:
give back Error("Database query failed: {err}\nSQL: {sql}")
Layered error handling
In a well-structured application, errors bubble up through layers. Each layer adds context.
-- Layer 1: Database (low-level)
to findUserById id -> Result of object:
try:
row is await db.query("SELECT * FROM users WHERE id = ?", [id])
if row is nothing:
give back Error("User {id} not found in database")
give back Success(row)
if it fails err:
give back Error("Database error looking up user {id}: {err}")
-- Layer 2: Service (business logic)
to getUserProfile id -> Result of object:
user is findUserById(id)?
give back Success({name: user.name, email: user.email})
-- Layer 3: Route handler (top-level)
match getUserProfile(requestedId):
when Success profile:
respond with profile
when Error msg:
say "[Error] {msg}"
respond with {error: "Something went wrong"} status 500
Full example: a file processor
This function reads a JSON file, validates its structure, and returns the parsed data. It handles missing files, invalid JSON, and unexpected structure.
to loadConfig path as text -> Result of object:
-- Step 1: check the file exists
if exists(path) is no:
give back Error("Config file not found: {path}")
-- Step 2: read the file
try:
raw is readFile(path)
if it fails err:
give back Error("Cannot read {path}: {err} (check permissions)")
-- Step 3: parse the JSON
try:
config is parseJSON(raw)
if it fails err:
give back Error("Invalid JSON in {path}: {err}")
-- Step 4: validate required fields
if config.port is nothing:
give back Error("Config missing required field: port")
if config.host is nothing:
config.host is "localhost" -- default is OK
give back Success(config)
-- Usage
result is loadConfig("app.json")
match result:
when Success config:
say "Server starting on {config.host}:{config.port}"
when Error msg:
say "Startup failed: {msg}"
exit(1)
3. Working with Data
Most programs revolve around transforming data: reading it, cleaning it, reshaping it, and writing it somewhere else. Quill has several features that make data work concise and readable.
Destructuring objects and lists
Pull out exactly the fields you need in one step.
-- Object destructuring
response is {status: 200, body: "OK", headers: {}}
{status, body} is response
say status -- 200
say body -- "OK"
-- List destructuring
coordinates are [40.7, -74.0, 10]
[lat, lng, altitude] is coordinates
say "Lat: {lat}, Lng: {lng}"
-- Destructuring in loops
users are [
{name: "Alice", role: "admin"},
{name: "Bob", role: "user"}
]
for each {name, role} in users:
say "{name} is an {role}"
Spread operator for merging and copying
-- Merge objects (later values win)
defaults is {theme: "light", lang: "en", pageSize: 20}
userPrefs is {theme: "dark", pageSize: 50}
settings is {...defaults, ...userPrefs}
say settings.theme -- "dark" (user preference wins)
say settings.lang -- "en" (default kept)
say settings.pageSize -- 50 (user preference wins)
-- Copy a list and add items
original are [1, 2, 3]
extended are [...original, 4, 5]
say extended -- [1, 2, 3, 4, 5]
-- Update one field without mutating
user is {name: "Alice", age: 30, active: yes}
updated is {...user, age: 31}
say user.age -- 30 (original unchanged)
say updated.age -- 31
Pipe operator chains for data transformation
Pipes let you read data transformations in the order they happen, from left to right.
Bad: deeply nested function calls.
-- Bad: reads inside-out, hard to follow
result is join(sort(unique(filter(names, to n: length(n) is greater than 2))), ", ")
Good: pipe each step clearly.
-- Good: reads top to bottom, each step is clear
result is names
| filter(to n: length(n) is greater than 2)
| unique
| sort
| join(", ")
Immutability with deepCopy()
Objects and lists are passed by reference in Quill. If you need a fully independent copy (so changes to the copy do not affect the original), use deepCopy().
original is {name: "Alice", scores: [90, 85]}
clone is deepCopy(original)
push(clone.scores, 100)
say original.scores -- [90, 85] (unchanged)
say clone.scores -- [90, 85, 100] (only the copy changed)
Use deepCopy() when you receive data from a function and plan to modify it, but the original must stay intact. For read-only access, a plain reference is faster and fine.
Full example: processing user records
Given a list of user records, filter out inactive users, transform their data, group by role, and sort each group by name.
-- Sample data
users are [
{name: "Alice", role: "admin", active: yes, score: 92},
{name: "Bob", role: "user", active: no, score: 45},
{name: "Carol", role: "user", active: yes, score: 88},
{name: "Dave", role: "admin", active: yes, score: 76},
{name: "Eve", role: "user", active: yes, score: 95},
{name: "Frank", role: "user", active: no, score: 30}
]
-- Step 1: keep only active users
activeUsers is users | filter(to u: u.active)
-- Step 2: transform into display objects
displayUsers is activeUsers | map_list(to u: {
label: "{u.name} ({u.role})",
grade: if u.score is greater than 89 then "A" otherwise "B",
...u
})
-- Step 3: group by role
grouped is {}
for each u in displayUsers:
if grouped[u.role] is nothing:
grouped[u.role] is []
push(grouped[u.role], u)
-- Step 4: sort each group by name
for each role in keys(grouped):
grouped[role] is grouped[role] | sort(to a b: a.name is less than b.name)
-- Display results
for each role in keys(grouped):
say "=== {upper(role)} ==="
for each u in grouped[role]:
say " {u.label} - Grade {u.grade}"
4. Async Patterns
Async programming is how you handle operations that take time: network requests, file I/O, database queries. Quill makes async intuitive with await, parallel, race, and channels.
Basic await vs parallel
Bad: fetching things one after another when they are independent.
-- Bad: sequential (slow) -- each waits for the previous one
users is await fetch("/api/users")
posts is await fetch("/api/posts")
comments is await fetch("/api/comments")
-- Total time: users + posts + comments
Good: use parallel when requests do not depend on each other.
-- Good: parallel (fast) -- all three run at the same time
parallel:
users is fetch("/api/users")
posts is fetch("/api/posts")
comments is fetch("/api/comments")
-- Total time: max(users, posts, comments)
Error handling in async code
Wrap async blocks in try/if it fails. A single failure in parallel cancels the entire block, so use parallel settled when you want all tasks to complete regardless.
-- All-or-nothing: if any request fails, handle the error
try:
parallel:
users is fetchJSON("/api/users")
posts is fetchJSON("/api/posts")
if it fails err:
say "Request failed: {err}"
-- Partial success: each result is Success() or Error()
parallel settled:
a is fetchJSON("/api/a")
b is fetchJSON("/api/b")
if a.ok:
say "A succeeded: {a.value}"
otherwise:
say "A failed: {a.error}"
Race conditions and how to avoid them
A race condition occurs when two tasks modify the same data at the same time and produce unpredictable results. The solution in Quill is to use channels instead of shared variables.
Bad: two tasks modifying a shared counter.
-- Bad: both tasks read and write `count` at the same time
count is 0
spawn task a:
repeat 1000 times:
count is count + 1
spawn task b:
repeat 1000 times:
count is count + 1
-- count might not be 2000!
Good: funnel updates through a channel.
-- Good: use a channel to serialize updates
channel updates with buffer 100
spawn task a:
repeat 1000 times:
send 1 to updates
spawn task b:
repeat 1000 times:
send 1 to updates
-- Single consumer: no race condition
count is 0
repeat 2000 times:
val is receive from updates
count is count + val
say count -- always 2000
Channels for producer/consumer
Channels decouple the code that produces work from the code that consumes it: a producer enqueues jobs, and a fixed pool of workers drains them.
channel jobs with buffer 10
channel results with buffer 10
-- Producer: enqueue work
spawn task producer:
for each url in urls:
send url to jobs
-- Workers: process jobs in parallel
repeat 5 times with i:
spawn task worker:
-- Each worker loops, pulling jobs until it is cancelled
repeat forever:
url is receive from jobs
data is await fetchJSON(url)
send data to results
-- Collector: gather results
repeat length(urls) times:
result is receive from results
say "Got: {result.title}"
Timeouts with select/after
When a response might never arrive, select waits on a channel while after sets a deadline:
channel response with buffer 1
spawn task fetcher:
data is await fetchJSON("https://slow-api.example.com/data")
send data to response
select:
when receive data from response:
say "Got data: {data.title}"
after 5000:
say "Request timed out after 5 seconds"
cancel fetcher
Full example: fetch from 3 APIs, merge results, with timeout
to fetchDashboardData -> Result of object:
-- Use race to enforce a global timeout
result is race:
fetchAllData()
delay(10000) then give back Error("Dashboard load timed out")
give back result
to fetchAllData -> Result of object:
try:
parallel:
usersRaw is fetchJSON("https://api.example.com/users")
ordersRaw is fetchJSON("https://api.example.com/orders")
statsRaw is fetchJSON("https://api.example.com/stats")
-- Merge the data into a single dashboard object
dashboard is {
totalUsers: length(usersRaw),
recentOrders: ordersRaw | filter(to o: o.status is "pending") | sort(to a b: a.date is greater than b.date),
revenue: statsRaw.revenue,
topProducts: statsRaw.products | sort(to a b: a.sales is greater than b.sales) | take(5)
}
give back Success(dashboard)
if it fails err:
give back Error("Failed to fetch dashboard data: {err}")
-- Usage
match fetchDashboardData():
when Success data:
say "Users: {data.totalUsers}"
say "Pending orders: {length(data.recentOrders)}"
say "Revenue: ${data.revenue}"
when Error msg:
say "Dashboard unavailable: {msg}"
Retry with backoff
Network calls can fail temporarily. A retry pattern with increasing delays handles transient errors gracefully.
to fetchWithRetry url maxRetries -> Result of object:
retries is 0
repeat forever:
try:
data is await fetchJSON(url)
give back Success(data)
if it fails err:
retries is retries + 1
if retries is greater than maxRetries:
give back Error("Failed after {maxRetries} retries: {err}")
-- Exponential backoff: 1s, 2s, 4s, 8s...
waitMs is 1000 * (2 ** (retries - 1))
say "Retry {retries}/{maxRetries} in {waitMs}ms..."
await delay(waitMs)
-- Usage
result is await fetchWithRetry("https://api.example.com/data", 3)
Cancellation pattern
Cancel background tasks when they are no longer needed to avoid wasted resources.
spawn task poller:
repeat forever:
data is await fetchJSON("/api/status")
say "Status: {data.status}"
await delay(5000)
-- Later, when the user navigates away:
cancel poller
say "Polling stopped."
A cancelled task stops at its next await. If the task needs to release resources when cancelled, wrap its body in try/if it fails.
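A task can pair cancellation with try/if it fails to release resources on the way out -- a sketch, assuming cancellation surfaces as a failure inside the task (openLog, closeLog, writeLine, and the logLines channel are hypothetical):

channel logLines with buffer 100
spawn task logger:
try:
file is openLog("app.log")
repeat forever:
line is receive from logLines
writeLine(file, line)
if it fails err:
closeLog(file) -- runs even if the task is cancelled mid-loop
-- Later, on shutdown:
cancel logger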
5. Testing Patterns
Tests are your safety net. They catch bugs before your users do, and they give you the confidence to refactor without fear. Quill has built-in testing with test and expect -- no frameworks needed.
Arrange / Act / Assert
Structure every test in three steps: set up the data (Arrange), run the code (Act), and check the result (Assert).
test "calculateDiscount applies 10% for orders over $100":
-- Arrange
order is {items: [{price: 60}, {price: 50}], coupon: nothing}
-- Act
discount is calculateDiscount(order)
-- Assert
expect discount is 11
Testing edge cases
Always test the boundaries: empty inputs, zero, negative numbers, null values, very large inputs.
to safeDivide a b -> Result of number:
if b is 0:
give back Error("Division by zero")
give back Success(a / b)
test "safeDivide normal case":
expect safeDivide(10, 2) is Success(5)
test "safeDivide by zero returns Error":
expect safeDivide(10, 0) is Error("Division by zero")
test "safeDivide with negative numbers":
expect safeDivide(-10, 2) is Success(-5)
test "safeDivide with decimals":
expect safeDivide(7, 2) is Success(3.5)
Testing async code
Use await inside test blocks just like anywhere else.
test "fetchUser returns the correct user":
user is await fetchUser(1)
expect user.name is "Alice"
expect user.id is 1
test "fetchUser with invalid id returns Error":
result is await fetchUser(-1)
expect result is Error("User not found")
Mock patterns
Replace real dependencies with predictable stand-ins. Pass dependencies as function arguments to make them swappable in tests.
-- Production code: accepts a fetcher function
to getGreeting userId fetcher:
user is await fetcher(userId)
give back "Hello, {user.name}!"
-- In tests: pass a mock fetcher
to mockFetcher id:
give back {name: "TestUser", id: id}
test "getGreeting uses fetcher correctly":
greeting is await getGreeting(1, mockFetcher)
expect greeting is "Hello, TestUser!"
Instead of calling fetch directly, accept a fetcher parameter. In production you pass the real one; in tests you pass a fake.
Organizing test files
Put tests next to the code they test, or in a dedicated tests/ directory. Name test files with a .test.quill suffix so quill test discovers them automatically.
-- Option A: tests alongside source
src/
cart.quill
cart.test.quill
-- Option B: tests in a separate directory
src/
cart.quill
tests/
cart.test.quill
Table-driven tests
When you have many inputs to test, use a data list to avoid repeating test boilerplate.
cases are [
{input: "hello", expected: "HELLO"},
{input: "World", expected: "WORLD"},
{input: "", expected: ""},
{input: "123abc", expected: "123ABC"}
]
for each {input, expected} in cases:
test "upper(\"{input}\") returns \"{expected}\"":
expect upper(input) is expected
Testing error cases
Test that your functions fail correctly, not just that they succeed.
test "parseAge rejects non-numeric input":
result is parseAge("abc")
expect result is Error("'abc' is not a valid number")
test "parseAge rejects negative age":
result is parseAge("-5")
expect result is Error("Age cannot be negative")
test "parseAge accepts valid age":
result is parseAge("25")
expect result is Success(25)
Full example: testing a shopping cart
-- shopping-cart.quill
describe ShoppingCart:
items are []
to addItem name price quantity:
push(my.items, {name: name, price: price, quantity: quantity})
to removeItem name:
my.items is my.items | filter(to i: i.name is not name)
to subtotal:
give back my.items | map_list(to i: i.price * i.quantity) | reduce(to a b: a + b, 0)
to applyDiscount percent:
if percent is less than 0 or percent is greater than 100:
give back Error("Invalid discount: {percent}%")
give back Success(my.subtotal() * (1 - percent / 100))
to itemCount:
give back my.items | map_list(to i: i.quantity) | reduce(to a b: a + b, 0)
to clear:
my.items are []
-- Tests
test "empty cart has zero subtotal":
cart is new ShoppingCart()
expect cart.subtotal() is 0
test "empty cart has zero item count":
cart is new ShoppingCart()
expect cart.itemCount() is 0
test "addItem increases subtotal":
cart is new ShoppingCart()
cart.addItem("Widget", 9.99, 2)
expect cart.subtotal() is 19.98
test "addItem increases item count":
cart is new ShoppingCart()
cart.addItem("Widget", 9.99, 3)
expect cart.itemCount() is 3
test "multiple items sum correctly":
cart is new ShoppingCart()
cart.addItem("Widget", 10, 1)
cart.addItem("Gadget", 25, 2)
expect cart.subtotal() is 60
expect cart.itemCount() is 3
test "removeItem removes by name":
cart is new ShoppingCart()
cart.addItem("Widget", 10, 1)
cart.addItem("Gadget", 25, 2)
cart.removeItem("Widget")
expect cart.subtotal() is 50
expect cart.itemCount() is 2
test "removeItem with nonexistent name does nothing":
cart is new ShoppingCart()
cart.addItem("Widget", 10, 1)
cart.removeItem("Nonexistent")
expect cart.subtotal() is 10
test "applyDiscount 10% on $100":
cart is new ShoppingCart()
cart.addItem("Item", 100, 1)
expect cart.applyDiscount(10) is Success(90)
test "applyDiscount 0% returns full price":
cart is new ShoppingCart()
cart.addItem("Item", 50, 1)
expect cart.applyDiscount(0) is Success(50)
test "applyDiscount rejects negative percent":
cart is new ShoppingCart()
cart.addItem("Item", 50, 1)
expect cart.applyDiscount(-5) is Error("Invalid discount: -5%")
test "applyDiscount rejects over 100%":
cart is new ShoppingCart()
cart.addItem("Item", 50, 1)
expect cart.applyDiscount(150) is Error("Invalid discount: 150%")
test "clear empties the cart":
cart is new ShoppingCart()
cart.addItem("Widget", 10, 5)
cart.clear()
expect cart.subtotal() is 0
expect cart.itemCount() is 0
6. Security Patterns
Security is not a feature you bolt on at the end. It is a set of habits you practice from the beginning. These patterns protect your users and your application from the most common vulnerabilities.
Never hardcode secrets
Bad: secrets in source code.
-- Bad: anyone who reads the code can see your key
API_KEY is "sk-abc123secret456"
DB_PASSWORD is "hunter2"
Good: load secrets from the environment.
-- Good: secrets live in .env or your deployment config
API_KEY is env("API_KEY")
DB_PASSWORD is env("DB_PASSWORD")
if API_KEY is nothing:
say "Error: API_KEY environment variable is not set"
exit(1)
Never commit .env files to version control. Add .env to your .gitignore immediately.
Hashing passwords with argon2
Bad: storing passwords in plain text.
-- Bad: passwords stored as plain text -- one breach exposes everyone
to createUser name password:
user is {name: name, password: password}
saveToDatabase(user)
Good: hash passwords with argon2 before storing.
-- Good: hash the password, store only the hash
to createUser name password:
hashedPassword is await argon2(password)
user is {name: name, password: hashedPassword}
saveToDatabase(user)
Encrypting sensitive data at rest
For fields that must be stored but rarely read (SSNs, tokens), encrypt with a key kept outside the database:
-- Encrypt before saving
SECRET_KEY is env("ENCRYPTION_KEY")
encryptedSSN is encrypt(user.ssn, SECRET_KEY)
saveToDatabase({...user, ssn: encryptedSSN})
-- Decrypt when reading
encryptedSSN is loadFromDatabase(userId).ssn
plainSSN is decrypt(encryptedSSN, SECRET_KEY)
Timing-safe comparison for tokens
Regular string comparison (is) leaks information through timing. An attacker can measure how long the comparison takes to guess your token character by character. Use constantTimeEqual instead.
Bad:
-- Bad: timing attack possible
if token is expectedToken:
say "Authenticated"
Good:
-- Good: constant-time comparison, no timing leak
if constantTimeEqual(token, expectedToken):
say "Authenticated"
Input validation
Never trust user input. Validate type, length, and format before using it.
to validateEmail input -> Result of text:
if isText(input) is no:
give back Error("Email must be text")
if length(input) is greater than 254:
give back Error("Email too long")
if includes(input, "@") is no:
give back Error("Email must contain @")
give back Success(trim(lower(input)))
to validatePassword input -> Result of text:
if length(input) is less than 8:
give back Error("Password must be at least 8 characters")
if length(input) is greater than 128:
give back Error("Password must be at most 128 characters")
give back Success(input)
Full example: a secure login system
-- secure-auth.quill
-- A complete authentication system with proper password hashing,
-- token generation, and validation.
TOKEN_SECRET is env("TOKEN_SECRET")
TOKEN_EXPIRY_MS is 3600000 -- 1 hour
-- Register a new user
to register email password -> Result of object:
-- Validate inputs
cleanEmail is validateEmail(email)?
cleanPassword is validatePassword(password)?
-- Check if user already exists
existing is findUserByEmail(cleanEmail)
if existing is not nothing:
give back Error("A user with this email already exists")
-- Hash the password (never store plain text)
hashedPassword is await argon2(cleanPassword)
-- Save user
user is {
id: uuid(),
email: cleanEmail,
password: hashedPassword,
createdAt: now()
}
saveToDatabase(user)
give back Success({id: user.id, email: user.email})
-- Log in an existing user
to login email password -> Result of object:
cleanEmail is validateEmail(email)?
-- Look up the user
user is findUserByEmail(cleanEmail)
if user is nothing:
give back Error("Invalid email or password")
-- Verify password against stored hash
passwordOk is await argon2(password, user.password)
if passwordOk is no:
give back Error("Invalid email or password")
-- Generate a session token
token is generateToken(user.id)
give back Success({token: token, userId: user.id})
-- Generate a signed token
to generateToken userId -> text:
payload is toJSON({userId: userId, exp: now() + TOKEN_EXPIRY_MS})
signature is hmac(payload, TOKEN_SECRET)
give back toBase64(payload) + "." + toBase64(signature)
-- Validate an incoming token
to validateToken token -> Result of object:
parts is split(token, ".")
if length(parts) is not 2:
give back Error("Malformed token")
[payloadB64, sigB64] is parts
payload is fromBase64(payloadB64)
expectedSig is toBase64(hmac(payload, TOKEN_SECRET))
-- Timing-safe comparison to prevent timing attacks
if constantTimeEqual(sigB64, expectedSig) is no:
give back Error("Invalid token signature")
data is parseJSON(payload)
if data.exp is less than now():
give back Error("Token expired")
give back Success({userId: data.userId})
Rate limiting
Protect login endpoints from brute-force attacks by tracking failed attempts.
-- Simple in-memory rate limiter
failedAttempts is {}
MAX_ATTEMPTS is 5
LOCKOUT_MS is 900000 -- 15 minutes
to checkRateLimit email -> Result of boolean:
record is failedAttempts[email]
if record is not nothing:
if record.count is greater than MAX_ATTEMPTS:
if now() - record.lastAttempt is less than LOCKOUT_MS:
give back Error("Too many failed attempts. Try again later.")
-- Lockout period expired, reset counter
failedAttempts[email] is nothing
give back Success(yes)
to recordFailedLogin email:
if failedAttempts[email] is nothing:
failedAttempts[email] is {count: 0, lastAttempt: now()}
failedAttempts[email].count is failedAttempts[email].count + 1
failedAttempts[email].lastAttempt is now()
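Wired into a login handler, it might look like this (a sketch; handleLogin is a hypothetical wrapper around the login function, and since the limiter above never clears the counter on success, the handler does it):

to handleLogin email password -> Result of object:
checkRateLimit(email)?
match login(email, password):
when Error msg:
recordFailedLogin(email)
give back Error(msg)
when Success session:
failedAttempts[email] is nothing -- a successful login resets the counter
give back Success(session)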
Sanitizing output
When displaying user-supplied data in HTML, always sanitize to prevent cross-site scripting (XSS).
-- Bad: plain string interpolation performs no escaping
page is "<h1>{userInput}</h1>"
-- Good: Quill's html tagged template auto-escapes interpolated values
page is html`<h1>{userInput}</h1>` -- html`` escapes < > & " '
7. Performance Patterns
Write clear code first, then optimize only when measurements show you need to. But knowing these patterns from the start helps you avoid the most common performance traps.
Lazy evaluation with iterators
When working with large datasets, lazy evaluation lets you process items one at a time without loading everything into memory. Use range, filter, map_list, and take in a pipe chain -- the computation only runs when you consume the results.
Bad: creating huge intermediate lists.
-- Bad: creates 3 full lists of 1,000,000 items each
numbers are range(1, 1000000)
evens are filter(numbers, to n: n % 2 is 0)
doubled are map_list(evens, to n: n * 2)
firstTen are take(doubled, 10)
Good: pipe into a lazy chain -- only processes 10 items.
-- Good: lazy -- only the 10 items you need are computed
firstTen is range(1, 1000000)
| filter(to n: n % 2 is 0)
| map_list(to n: n * 2)
| take(10)
Avoiding unnecessary copies
Spread creates a new object or list every time. In a tight loop, this is wasteful if you just need to read the data.
Bad: copying on every iteration.
-- Bad: creates 1000 new objects for no reason
for each user in users:
copy is {...user}
say copy.name
Good: just read the original.
-- Good: read directly, no copy needed
for each user in users:
say user.name
Parallel execution for independent tasks
When you have several independent I/O operations, run them in parallel.
-- Bad: 3 sequential fetches (total time = sum of all three)
a is await fetchJSON("/api/a")
b is await fetchJSON("/api/b")
c is await fetchJSON("/api/c")
-- Good: 3 parallel fetches (total time = slowest one)
parallel:
a is fetchJSON("/api/a")
b is fetchJSON("/api/b")
c is fetchJSON("/api/c")
Caching expensive computations
If a function is called repeatedly with the same arguments and the result does not change, cache it.
-- Simple memoization cache
cache is {}
to expensiveLookup key:
if cache[key] is not nothing:
give back cache[key]
result is await fetchJSON("https://api.example.com/data/{key}")
cache[key] is result
give back result
-- First call: fetches from API
data is expensiveLookup("user-123")
-- Second call: instant, from cache
data is expensiveLookup("user-123")
Cache with TTL (time to live)
Prevent stale data by expiring cache entries after a set duration.
cache is {}
CACHE_TTL is 60000 -- 1 minute
to cachedFetch key url:
entry is cache[key]
if entry is not nothing:
if now() - entry.timestamp is less than CACHE_TTL:
give back entry.data
data is await fetchJSON(url)
cache[key] is {data: data, timestamp: now()}
give back data
Batch operations
When you need to perform many database inserts or API calls, batch them instead of doing one at a time.
Bad: one insert per loop iteration.
-- Bad: 1000 separate database calls
for each record in records:
await db.insert(record)
Good: insert all at once.
-- Good: one batch insert
await db.insertMany(records)
Avoid repeated work in loops
Compute values outside the loop when possible.
-- Bad: calls length() on every iteration
repeat length(items) times with i:
    if i is less than length(items) / 2:
        say items[i]
-- Good: compute once, reuse
total is length(items)
half is total / 2
repeat total times with i:
    if i is less than half:
        say items[i]
Full example: processing a large dataset efficiently
-- Process a large CSV of sales records efficiently.
-- Goal: find the top 10 customers by total spend.
to topCustomers csvPath -> list:
    -- Read the file and split it into lines
    lines is readFile(csvPath) | split("\n")
    -- Skip the header row, drop blank lines, parse each line into a record
    records is lines
        | drop(1)
        | filter(to line: line is not "")
        | map_list(to line:
            parts is split(line, ",")
            {customer: parts[0], amount: toNumber(parts[1])}
        )
    -- Aggregate: sum amounts per customer
    totals is {}
    for each {customer, amount} in records:
        if totals[customer] is nothing:
            totals[customer] is 0
        totals[customer] is totals[customer] + amount
    -- Convert to a sorted list and take the top 10
    result is entries(totals)
        | map_list(to e: {customer: e[0], total: e[1]})
        | sort(to a b: a.total is greater than b.total)
        | take(10)
    give back result
-- Usage
top is topCustomers("sales.csv")
for each {customer, total} in top:
    say "{customer}: ${total}"
8. Common Idioms
Idiomatic Quill reads almost like English. These patterns show you how to write code that experienced Quill developers would recognize immediately as clean and natural.
Using match instead of long if/else chains
Before (verbose):
if status is "pending":
    handlePending()
otherwise if status is "approved":
    handleApproved()
otherwise if status is "rejected":
    handleRejected()
otherwise if status is "cancelled":
    handleCancelled()
otherwise:
    handleUnknown()
After (idiomatic):
match status:
    when "pending": handlePending()
    when "approved": handleApproved()
    when "rejected": handleRejected()
    when "cancelled": handleCancelled()
    otherwise: handleUnknown()
Pipe operator for readability
Before:
output is join(sort(unique(split(lower(trim(input)), " "))), ", ")
After:
output is input
    | trim
    | lower
    | split(" ")
    | unique
    | sort
    | join(", ")
Default values with or
Before:
if config.name is not nothing:
    name is config.name
otherwise:
    name is "default"
After:
name is config.name or "default"
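Because or falls through on nothing, you can chain several fallbacks and the first non-nothing value wins. A sketch, assuming a hypothetical env object that holds environment variables:
-- Try the explicit config, then the environment, then a constant
port is config.port or env.PORT or 8080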
Guard clauses -- early return for invalid input
Before (deeply nested):
to processPayment order:
    if order is not nothing:
        if order.total is greater than 0:
            if order.paymentMethod is not nothing:
                -- finally, the actual logic buried 3 levels deep
                charge(order)
            otherwise:
                give back Error("No payment method")
        otherwise:
            give back Error("Order total must be positive")
    otherwise:
        give back Error("Order is missing")
After (guard clauses):
to processPayment order:
    if order is nothing:
        give back Error("Order is missing")
    if order.total is not greater than 0:
        give back Error("Order total must be positive")
    if order.paymentMethod is nothing:
        give back Error("No payment method")
    -- All checks passed, do the real work
    charge(order)
Destructuring in function parameters
When a function takes an object, destructure in the signature to make it clear what fields are expected.
Before:
to formatAddress address:
    give back "{address.street}, {address.city}, {address.state} {address.zip}"
After:
to formatAddress {street, city, state, zip}:
    give back "{street}, {city}, {state} {zip}"
Using are for readability
Use are instead of is when the variable holds a collection. They work identically, but are reads more naturally.
-- Less readable
names is ["Alice", "Bob", "Carol"]
-- More readable
names are ["Alice", "Bob", "Carol"]
Chaining with match for control flow
Use match to handle different shapes of data cleanly.
to handleResponse response:
    match response:
        when {status: 200, body}:
            give back Success(parseJSON(body))
        when {status: 404}:
            give back Error("Resource not found")
        when {status: 429}:
            give back Error("Rate limited, try again later")
        when {status} if status is greater than 499:
            give back Error("Server error: {status}")
        otherwise:
            give back Error("Unexpected status: {response.status}")
Ten before/after comparisons
Here is a quick reference of common patterns showing verbose Quill versus idiomatic Quill.
1. Conditional assignment
-- Before
if age is greater than 17:
    label is "adult"
otherwise:
    label is "minor"
-- After
label is if age is greater than 17 then "adult" otherwise "minor"
2. Checking for existence
-- Before
if user.nickname is not nothing:
    displayName is user.nickname
otherwise:
    displayName is user.name
-- After
displayName is user.nickname or user.name
3. Building a string from a list
-- Before
result is ""
for each name in names:
    result is result + name + ", "
-- After
result is names | join(", ")
4. Filtering and counting
-- Before
count is 0
for each user in users:
    if user.active:
        count is count + 1
-- After
count is users | filter(to u: u.active) | length
5. Extracting a single field from a list of objects
-- Before
names are []
for each user in users:
    push(names, user.name)
-- After
names are users | map_list(to u: u.name)
6. Finding an item
-- Before
found is nothing
for each user in users:
    if user.id is targetId:
        found is user
-- After
found is users | find(to u: u.id is targetId)
7. Merging defaults with overrides
-- Before
config is {}
config.host is defaults.host
config.port is defaults.port
config.debug is defaults.debug
if overrides.host is not nothing:
    config.host is overrides.host
if overrides.port is not nothing:
    config.port is overrides.port
-- After
config is {...defaults, ...overrides}
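One caveat: spread order matters. Later spreads overwrite earlier ones, which is why overrides comes second. A small sketch:
-- The later spread wins, so port ends up as 9000
defaults is {host: "localhost", port: 8080}
overrides is {port: 9000}
config is {...defaults, ...overrides}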
8. Error handling with fallback
-- Before
result is loadUser(42)
match result:
    when Success user:
        name is user.name
    when Error msg:
        name is "anonymous"
-- After
name is try loadUser(42).name otherwise "anonymous"
9. Transforming data for display
-- Before
display are []
for each order in orders:
    formatted is {
        label: order.id + ": " + order.product,
        total: "$" + toText(order.amount)
    }
    push(display, formatted)
-- After
display are orders | map_list(to o: {
    label: "{o.id}: {o.product}",
    total: "${o.amount}"
})
10. Multiple return conditions
-- Before
to classify score:
    if score is greater than 89:
        give back "A"
    otherwise if score is greater than 79:
        give back "B"
    otherwise if score is greater than 69:
        give back "C"
    otherwise:
        give back "F"
-- After
to classify score:
    match score:
        when n if n is greater than 89: give back "A"
        when n if n is greater than 79: give back "B"
        when n if n is greater than 69: give back "C"
        otherwise: give back "F"
Next steps
- Read the Language Reference for complete syntax details
- Explore the Standard Library to discover all built-in functions
- Check out Testing for more on writing and running tests
- Try the Playground to experiment with these patterns in your browser
- Browse Examples for complete working programs