<!DOCTYPE html>
<html lang="en">
<head>
  <meta name="generator" content="HTML Tidy for HTML5 for Linux version 5.2.0">
  <title>Testing</title>
  <meta charset="utf-8">
  <meta name="description" content="A collection of examples of using Common Lisp">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <link rel="stylesheet" href="assets/style.css">
  <script type="text/javascript" src="assets/highlight-lisp.js"></script>
  <script type="text/javascript" src="assets/jquery-3.2.1.min.js"></script>
  <script type="text/javascript" src="assets/jquery.toc/jquery.toc.min.js"></script>
  <script type="text/javascript" src="assets/toggle-toc.js"></script>

  <link rel="stylesheet" href="assets/github.css">

  <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css" integrity="sha384-BVYiiSIFeK1dGmJRAkycuHAHRg32OmUcww7on3RYdg4Va+PmSTsz/K68vbdEjh4u" crossorigin="anonymous">
</head>
<body>
<h1 id="title-xs"><a href="index.html">The Common Lisp Cookbook</a> – Testing</h1>
<div id="logo-container">
  <a href="index.html">
    <img id="logo" src="assets/cl-logo-blue.png"/>
  </a>

  <div id="searchform-container">
    <form onsubmit="duckSearch()" action="javascript:void(0)">
      <input id="searchField" type="text" value="" placeholder="Search...">
    </form>
  </div>

  <div id="toc-container" class="toc-close">
    <div id="toc-title">Table of Contents</div>
    <ul id="toc" class="list-unstyled"></ul>
  </div>
</div>

<div id="content-container">
<h1 id="title-non-xs"><a href="index.html">The Common Lisp Cookbook</a> – Testing</h1>

<!-- Announcement we can keep for 1 month or more. I remove it and re-add it from time to time. -->
<p class="announce">
  📹 <a href="https://www.udemy.com/course/common-lisp-programming/?couponCode=6926D599AA-LISP4ALL">NEW! Learn Lisp in videos and support our contributors with this 40% discount.</a>
</p>
<p class="announce-neutral">
  📕 <a href="index.html#download-in-epub">Get the EPUB and PDF</a>
</p>


<div id="content">
<p>So you want to easily test the code you’re writing? The following
recipe covers how to write automated tests and see their code
coverage. We also give pointers to plug those tests into modern continuous
integration services such as GitHub Actions, Gitlab CI, Travis CI or Coveralls.</p>

<p>We will be using a mature testing framework called
<a href="https://github.com/lispci/fiveam">FiveAM</a>. It supports test suites,
random testing, test fixtures (to a certain extent) and, of course,
interactive development.</p>

<p>Previously on the Cookbook, this recipe was cooked with <a href="https://github.com/fukamachi/prove">Prove</a>. It used to be a widely liked testing framework but, because of some shortcomings, its repository was later archived. Its successor <a href="https://github.com/fukamachi/rove">Rove</a> is not stable enough and lacks some features, so we didn’t pick it. There are also some <a href="https://github.com/CodyReichert/awesome-cl#unit-testing">other testing frameworks</a> to explore if you feel like it.</p>

<p>FiveAM has an <a href="https://common-lisp.net/project/fiveam/docs/index.html">API documentation</a>. You may inspect it, or simply read the docstrings in the code. Most of the time, they provide enough information to answer your questions… if you didn’t find the answer here first. Let’s get started.</p>

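<p>For example, once FiveAM is loaded (see below), docstrings can be read straight from the REPL; the symbol chosen here is only an illustration:</p>

<pre><code class="language-lisp">(describe 'fiveam:run!)
;; or, for the docstring only:
(documentation 'fiveam:run! 'function)
</code></pre>
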
<h2 id="testing-with-fiveam">Testing with FiveAM</h2>
|
|||
|
|
|||
|
<p>FiveAM has 3 levels of abstraction: check, test and suite. As you may have guessed:</p>
|
|||
|
|
|||
|
<ol>
|
|||
|
<li>A <strong>check</strong> is a single assertion that checks that its argument is truthy. The most used check is <code>is</code>. For example, <code>(is (= 2 (+ 1 1)))</code>.</li>
|
|||
|
<li>A <strong>test</strong> is the smallest runnable unit. A test case may contain multiple checks. Any check failure leads to the failure of the whole test.</li>
|
|||
|
<li>A <strong>suite</strong> is a collection of tests. When a suite is run, all tests inside would be performed. A suite allows paternity, which means that running a suite will run all the tests defined in it and in its children suites.</li>
|
|||
|
</ol>
|
|||
|
|
|||
|
<p>A simple code sample containing the 3 basic blocks mentioned above can be shown as follows:</p>
|
|||
|
|
|||
|
<pre><code class="language-lisp">(def-suite* my-suite)
|
|||
|
|
|||
|
(test my-test
|
|||
|
(is (= 2 (+ 1 1))))
|
|||
|
</code></pre>
|
|||
|
|
|||
|
<p>It is totally up to the user to decide the hierarchy of tests and suites. Here we mainly focus on the usage of FiveAM.</p>

<p>Suppose we have built a rather complex system and the following functions are part of it:</p>

<pre><code class="language-lisp">;; We have a custom "file doesn't exist" condition.
(define-condition file-not-existing-error (error)
  ((filename :type string :initarg :filename :reader filename)))

;; We have a function that tries to read a file and signals the above condition
;; if the file doesn't exist.
(defun read-file-as-string (filename &key (error-if-not-exists t))
  "Read file content as string. FILENAME specifies the path of file.

Keyword ERROR-IF-NOT-EXISTS specifies the operation to perform when the file
is not found. T (by default) means an error will be signaled. When given NIL,
the function will return NIL in that case."
  (cond
    ((uiop:file-exists-p filename)
     (uiop:read-file-string filename))
    (error-if-not-exists
     (error 'file-not-existing-error :filename filename))
    (t nil)))
</code></pre>

<p>We will write tests for that code. In particular, we must ensure:</p>

<ul>
  <li>that the content read from a file is the expected content,</li>
  <li>that the condition is signaled if the file doesn’t exist.</li>
</ul>

<h3 id="install-and-load">Install and load</h3>
|
|||
|
|
|||
|
<p><code>FiveAM</code> is in Quicklisp and can be loaded with the following command:</p>
|
|||
|
|
|||
|
<pre><code class="language-lisp">(ql:quickload "fiveam")
|
|||
|
</code></pre>
|
|||
|
|
|||
|
<p>The package is named <code>fiveam</code> with a nickname <code>5am</code>. For the sake of simplicity, we will ignore the package prefix in the following code samples.</p>
|
|||
|
|
|||
|
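<p>If you keep the prefix instead, the first example would read as follows (a trivial sketch, shown only to make the package naming concrete):</p>

<pre><code class="language-lisp">(5am:def-suite* my-suite)

(5am:test my-test
  (5am:is (= 2 (+ 1 1))))
</code></pre>
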
<p>It is like we <code>:use</code>d fiveam in our test package definition. You
can also follow along in the REPL with <code>(use-package :fiveam)</code>.</p>

<p>Here is a package definition you can use:</p>

<pre><code class="language-lisp">(in-package :cl-user)
(defpackage my-fiveam-test
  (:use :cl
        :fiveam))
(in-package :my-fiveam-test)
</code></pre>

<h3 id="defining-suites-def-suite-def-suite">Defining suites (<code>def-suite</code>, <code>def-suite*</code>)</h3>

<p>Testing in FiveAM usually starts by defining a suite. A suite helps separate tests into smaller, better organized collections. It is highly recommended to define a single <em>root</em> suite for the sake of ASDF integration. We will talk about that later; for now, let’s focus on the testing itself.</p>

<p>The code below defines a suite named <code>my-system</code>. We will use it as the root suite for the whole system.</p>

<pre><code class="language-lisp">(def-suite my-system
  :description "Test my system")
</code></pre>

<p>Then let’s define another suite for testing the <code>read-file-as-string</code> function.</p>

<pre><code class="language-lisp">;; Define a suite and set it as the default for the following tests.
(def-suite read-file-as-string
  :description "Test the read-file-as-string function."
  :in my-system)
(in-suite read-file-as-string)

;; Alternatively, the following line is a combination of the 2 lines above.
(def-suite* read-file-as-string :in my-system)
</code></pre>

<p>Here a new suite named <code>read-file-as-string</code> has been defined. It is declared to be a child suite of <code>my-system</code>, as specified by the <code>:in</code> keyword. The macro <code>in-suite</code> sets it as the default suite for the tests defined later.</p>

<h3 id="defining-tests">Defining tests</h3>
|
|||
|
|
|||
|
<p>Before diving into tests, here is a brief introduction of the available checks you may use inside tests:</p>
|
|||
|
|
|||
|
<ul>
|
|||
|
<li>The <code>is</code> macro is likely the most used check. It simply checks if the given expression returns a true value and generates a <code>test-passed</code> or <code>test-failure</code> result accordingly.</li>
|
|||
|
<li>The <code>skip</code> macro takes a reason and generates a <code>test-skipped</code> result.</li>
|
|||
|
<li>The <code>signals</code> macro checks if the given condition was signaled during execution.</li>
|
|||
|
</ul>
|
|||
|
|
|||
|
<p>There is also:</p>
|
|||
|
|
|||
|
<ul>
|
|||
|
<li><code>finishes</code>: passes if the assertion body executes to normal completion. In other words if body does signal, return-from or throw, then this test fails.</li>
|
|||
|
<li><code>pass</code>: just make the test pass.</li>
|
|||
|
<li><code>is-true</code>: like <code>is</code>, but unlike it this check does not inspect the assertion body to determine how to report the failure. Similarly, there is <code>is-false</code>.</li>
|
|||
|
</ul>
|
|||
|
|
|||
|
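<p>Below is a minimal sketch of our own that exercises these other checks together (the test name and values are arbitrary):</p>

<pre><code class="language-lisp">(test demo-other-checks
  ;; FINISHES passes because the body runs to completion.
  (finishes (length (list 1 2 3)))
  ;; IS-TRUE and IS-FALSE only look at the returned value.
  (is-true (member 2 '(1 2 3)))
  (is-false (member 5 '(1 2 3)))
  ;; PASS records an unconditional success.
  (pass)
  ;; SKIP records a skipped check, with a reason.
  (skip "Not written yet."))
</code></pre>
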
<p>Please note that all the checks accept an optional reason, as a string, that can be formatted with format directives (see more below). When omitted, FiveAM generates a report that explains the failure according to the arguments passed to the function.</p>

<p>The <code>test</code> macro provides a simple way to define a test with a name.</p>

<p><em>Note that below, we expect two files to exist: <code>/tmp/hello.txt</code> should contain “hello” and <code>/tmp/empty.txt</code> should be empty.</em></p>

<pre><code class="language-lisp">;; Our first "base" case: we read a file that contains "hello".
(test read-file-as-string-normal-file
  (let ((result (read-file-as-string "/tmp/hello.txt")))
    ;; Tip: put the expected value as the first argument of = or equal, string= etc.
    ;; FiveAM generates a more readable report following this convention.
    (is (string= "hello" result))))

;; We read an empty file.
(test read-file-as-string-empty-file
  (let ((result (read-file-as-string "/tmp/empty.txt")))
    (is (not (null result)))
    ;; The reason can be used to provide formatted text.
    (is (= 0 (length result))
        "Empty string expected but got ~a" result)))

;; Now we test that reading a non-existing file signals our condition.
(test read-file-as-string-non-existing-file
  (let ((result (read-file-as-string "/tmp/non-existing-file.txt"
                                     :error-if-not-exists nil)))
    (is (null result)
        "Reading a file should return NIL when :ERROR-IF-NOT-EXISTS is set to NIL"))
  ;; SIGNALS accepts the unquoted name of a condition and a body to evaluate.
  ;; Here it checks if FILE-NOT-EXISTING-ERROR is signaled.
  (signals file-not-existing-error
    (read-file-as-string "/tmp/non-existing-file.txt"
                         :error-if-not-exists t)))
</code></pre>

<p>In the above code, 3 tests were defined, with 5 checks in total. Some checks were actually redundant, for the sake of demonstration. You may put all the checks in one big test, or split them into multiple scenarios. It is up to you.</p>

<p>The macro <code>test</code> is a convenience wrapper around <code>def-test</code> for defining simple tests. You may read its docstring for a more complete introduction, for example to read about <code>:depends-on</code>.</p>

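<p>For instance, here is a small sketch of <code>:depends-on</code> (the names are ours): with the longer form of <code>test</code>, the name is given together with options, and the second test only runs if the first one passed.</p>

<pre><code class="language-lisp">(test base-case
  (is (= 2 (+ 1 1))))

(test (dependent-case :depends-on base-case)
  (is (string= "hello" (string-downcase "HELLO"))))
</code></pre>
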
<h3 id="running-tests">Running tests</h3>
|
|||
|
|
|||
|
<p>FiveAm provides multiple ways to run tests. The macro <code>run!</code> is a good start point during development. It accepts a name of suite or test and run it, then prints testing report in standard output. Let’s run the tests now!</p>
|
|||
|
|
|||
|
<pre><code class="language-lisp">(run! 'my-system)
|
|||
|
; Running test suite MY-SYSTEM
|
|||
|
; Running test READ-FILE-AS-STRING-EMPTY-FILE ..
|
|||
|
; Running test READ-FILE-AS-STRING-NON-EXISTING-FILE ..
|
|||
|
; Running test READ-FILE-AS-STRING-NORMAL-FILE .
|
|||
|
; Did 5 checks.
|
|||
|
; Pass: 5 (100%)
|
|||
|
; Skip: 0 ( 0%)
|
|||
|
; Fail: 0 ( 0%)
|
|||
|
; => T, NIL, NIL
|
|||
|
</code></pre>
|
|||
|
|
|||
|
<p>If we mess <code>read-file-as-string-non-existing-file</code> up by replacing <code>/tmp/non-existing-file.txt</code> with <code>/tmp/hello.txt</code>, the test would fail (sure!) as expected:</p>
|
|||
|
|
|||
|
<pre><code class="language-lisp">(run! 'read-file-as-string-non-existing-file)
|
|||
|
; Running test READ-FILE-AS-STRING-NON-EXISTING-FILE ff
|
|||
|
; Did 2 checks.
|
|||
|
; Pass: 0 ( 0%)
|
|||
|
; Skip: 0 ( 0%)
|
|||
|
; Fail: 2 (100%)
|
|||
|
; Failure Details:
|
|||
|
; --------------------------------
|
|||
|
; READ-FILE-AS-STRING-NON-EXISTING-FILE []:
|
|||
|
; Should return NIL when :ERROR-IF-NOT-EXISTS is set to NIL.
|
|||
|
; --------------------------------
|
|||
|
; --------------------------------
|
|||
|
; READ-FILE-AS-STRING-NON-EXISTING-FILE []:
|
|||
|
; Failed to signal a FILE-NOT-EXISTING-ERROR.
|
|||
|
; --------------------------------
|
|||
|
; => NIL
|
|||
|
; (#<IT.BESE.FIVEAM::TEST-FAILURE {10064485F3}>
|
|||
|
; #<IT.BESE.FIVEAM::TEST-FAILURE {1006438663}>)
|
|||
|
; NIL
|
|||
|
</code></pre>
|
|||
|
|
|||
|
<p>The behavior of the suite/test runner can be customized with the <code>*on-failure*</code> variable, which controls what to do when a check failure happens. It can be set to one of the following values:</p>

<ul>
  <li><code>:debug</code> to drop to the debugger.</li>
  <li><code>:backtrace</code> to print a backtrace and continue.</li>
  <li><code>NIL</code> (default) to simply continue and print the report.</li>
</ul>

<p>There is also <code>*on-error*</code>, which accepts the same values and controls what to do when an unexpected error is signaled.</p>

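<p>For example, to keep the run going but get a backtrace for each failing check, and to enter the debugger on unexpected errors:</p>

<pre><code class="language-lisp">(setf fiveam:*on-failure* :backtrace)
(setf fiveam:*on-error* :debug)
(run! 'my-system)
</code></pre>
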
<h4 id="running-tests-as-they-are-compiled">Running tests as they are compiled</h4>
|
|||
|
|
|||
|
<p>Under normal circumstances, a test is written and compiled (with the
|
|||
|
usual <code>C-c C-c</code> in Slime) separately from the moment it is run. If you
|
|||
|
want to run the test when it is defined (with <code>C-c C-c</code>), set this:</p>
|
|||
|
|
|||
|
<pre><code class="language-lisp">(setf fiveam:*run-test-when-defined* t)
|
|||
|
</code></pre>
|
|||
|
|
|||
|
<h3 id="custom-and-shorter-tests-explanations">Custom and shorter tests explanations</h3>
|
|||
|
|
|||
|
<p>We said earlier that a check accepts an optional custom reason that can be formatted with <code>format</code> directives. Here’s a simple example.</p>
|
|||
|
|
|||
|
<p>We are testing a math function:</p>
|
|||
|
|
|||
|
<pre><code class="language-lisp">(test simple-maths
|
|||
|
(is (= 3 (+ 1 1))))
|
|||
|
</code></pre>
|
|||
|
|
|||
|
<p>When we <code>run!</code> it, we see this somewhat lengthy but informative output (and that’s very important):</p>
|
|||
|
|
|||
|
<pre><code>Running test suite NIL
|
|||
|
Running test SIMPLE-MATHS f
|
|||
|
Did 1 check.
|
|||
|
Pass: 0 ( 0%)
|
|||
|
Skip: 0 ( 0%)
|
|||
|
Fail: 1 (100%)
|
|||
|
|
|||
|
Failure Details:
|
|||
|
--------------------------------
|
|||
|
SIMPLE-MATHS []:
|
|||
|
|
|||
|
(+ 1 1)
|
|||
|
|
|||
|
evaluated to
|
|||
|
|
|||
|
2
|
|||
|
|
|||
|
which is not
|
|||
|
|
|||
|
=
|
|||
|
|
|||
|
to
|
|||
|
|
|||
|
3
|
|||
|
|
|||
|
|
|||
|
--------------------------------
|
|||
|
</code></pre>
|
|||
|
|
|||
|
<p>Now, we can give it a custom reason:</p>
|
|||
|
|
|||
|
<pre><code class="language-lisp">(test simple-maths
|
|||
|
(is (= 3 (+ 1 1))
|
|||
|
"Maths should work, right? ~a. Another parameter is: ~S" t :foo))
|
|||
|
</code></pre>
|
|||
|
|
|||
|
<p>And we will see:</p>
|
|||
|
|
|||
|
<pre><code>Running test suite NIL
|
|||
|
Running test SIMPLE-MATHS f
|
|||
|
Did 1 check.
|
|||
|
Pass: 0 ( 0%)
|
|||
|
Skip: 0 ( 0%)
|
|||
|
Fail: 1 (100%)
|
|||
|
|
|||
|
Failure Details:
|
|||
|
--------------------------------
|
|||
|
SIMPLE-MATHS []:
|
|||
|
Maths should work, right? T. Another parameter is: :FOO
|
|||
|
--------------------------------
|
|||
|
</code></pre>
|
|||
|
|
|||
|
<h3 id="fixtures">Fixtures</h3>
|
|||
|
|
|||
|
<p>FiveAM also provides a feature called <strong>fixtures</strong> for setting up
|
|||
|
testing context. The goal is to ensure that some functions are not
|
|||
|
called and always return the same result. Think functions hitting the
|
|||
|
network: you want to isolate the network call in a small function and
|
|||
|
write a fixture so that in your tests, this function always returns
|
|||
|
the same, known result. (But if you do so, you might also need an “end
|
|||
|
to end” test that tests with real data and all your code…)</p>
|
|||
|
|
|||
|
<p>However, FiveAM’s fixture system is nothing more than a macro, it is
|
|||
|
not fully-featured compared to other libraries such as
|
|||
|
<a href="https://github.com/Chream/mockingbird">Mockingbird</a>, and even
|
|||
|
FiveAM’s maintainer encourages to “just use a macro” instead.</p>
|
|||
|
|
|||
|
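<p>Here is what “just use a macro” can look like: a minimal sketch of our own, with a hypothetical <code>fetch-user</code> function, that temporarily replaces a network-bound function with a stub returning known data.</p>

<pre><code class="language-lisp">;; A hypothetical function of our system that normally hits the network.
(defun fetch-user (id)
  (declare (ignore id))
  (error "Would perform a network call here."))

(defmacro with-fake-fetch-user (&body body)
  "Run BODY with FETCH-USER replaced by a stub returning canned data."
  `(let ((original (fdefinition 'fetch-user)))
     (unwind-protect
          (progn
            (setf (fdefinition 'fetch-user)
                  (lambda (id) (declare (ignore id)) (list :id 1 :name "alice")))
            ,@body)
       (setf (fdefinition 'fetch-user) original))))

(test user-name
  (with-fake-fetch-user
    (is (string= "alice" (getf (fetch-user 42) :name)))))
</code></pre>
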
<p>Mockingbird (and maybe other libraries), in addition to the basic
feature described above, also lets you count the number of times a
function was called, with what arguments, and so on.</p>

<h3 id="random-checking">Random checking</h3>

<p>The goal of random testing is to assist the developer in generating
test cases, and thus, to find cases that the developer would not have
thought about.</p>

<p>We have a few data generators at our disposal, for example:</p>

<pre><code class="language-lisp">(gen-float)
#<CLOSURE (LAMBDA () :IN GEN-FLOAT) {1005A906AB}>

(funcall (gen-float))
9.220082e37

(funcall (gen-integer :max 27 :min -16))
26
</code></pre>

<p>or again <code>gen-string</code>, <code>gen-list</code>, <code>gen-tree</code>, <code>gen-buffer</code> and <code>gen-character</code>.</p>

<p>And we have the <code>for-all</code> macro, which runs 100 checks, taking a new value from the given generators on each turn:</p>

<pre><code class="language-lisp">(test randomtest
  (for-all ((a (gen-integer :min 1 :max 10))
            (b (gen-integer :min 1 :max 10)))
    "Test random tests."
    (is (<= a b))))
</code></pre>

<p>When you run this with <code>(run! 'randomtest)</code>, I expect you will hit a failure: you can’t
possibly always get <code>a</code> lower than <code>b</code>, can you?</p>

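<p>A property that does hold, as a quick counterpart sketch (values chosen by us):</p>

<pre><code class="language-lisp">(test randomtest-sum
  (for-all ((a (gen-integer :min 1 :max 10))
            (b (gen-integer :min 1 :max 10)))
    "The sum of two positive integers is at least as large as each of them."
    (is (<= a (+ a b)))
    (is (<= b (+ a b)))))
</code></pre>
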
<p>For more, see <a href="https://common-lisp.net/project/fiveam/docs/Checks.html#Random_0020_0028QuickCheck-ish_0029_0020testing">FiveAM’s documentation</a>.</p>

<p>See also <a href="https://github.com/mcandre/cl-quickcheck">cl-quickcheck</a> and <a href="https://github.com/DalekBaldwin/check-it">Check-it</a>, inspired by Haskell’s <a href="https://en.wikipedia.org/wiki/QuickCheck">QuickCheck</a> test framework.</p>

<h3 id="asdf-integration">ASDF integration</h3>

<p>So it would be nice to provide a one-line trigger to test our <code>my-system</code> system. Recall that we said it is better to provide a root suite? Here is the reason:</p>

<pre><code class="language-lisp">(defsystem my-system
  ;; Parts omitted.
  :in-order-to ((test-op (test-op :my-system/test))))

(defsystem my-system/test
  ;; Parts omitted.
  :perform (test-op (op c)
                    (symbol-call :fiveam :run!
                                 (find-symbol* :my-system :my-system/test))))
</code></pre>

<p>The last line tells ASDF to find the symbol <code>:my-system</code> in the <code>my-system/test</code> package and to call <code>fiveam:run!</code> on it. In fact, it is equivalent to the <code>(run! 'my-system)</code> we used above.</p>

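<p>With this wiring in place, the whole test suite can then be run through the standard ASDF entry point:</p>

<pre><code class="language-lisp">(asdf:test-system :my-system)
</code></pre>
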
<h3 id="running-tests-on-the-terminal">Running tests on the terminal</h3>
|
|||
|
|
|||
|
<p>Until now, we ran our tests from our editor’s REPL. How can we run them from a terminal window?</p>
|
|||
|
|
|||
|
<p>As always, the required steps are as follow:</p>
|
|||
|
|
|||
|
<ul>
|
|||
|
<li>start our Lisp</li>
|
|||
|
<li>make sure Quicklisp is enabled (if we have external dependencies)</li>
|
|||
|
<li>load our main system</li>
|
|||
|
<li>load the test system</li>
|
|||
|
<li>run the FiveAM tests.</li>
|
|||
|
</ul>
|
|||
|
|
|||
|
<p>You could put them in a new <code>run-tests.lisp</code> file:</p>
|
|||
|
|
|||
|
<pre><code class="language-lisp">(load "mysystem.lisp")
|
|||
|
(load "mysystem-tests.lisp") ;; <-- where all the FiveAM tests are written.
|
|||
|
(in-package :mysystem-tests)
|
|||
|
|
|||
|
(run!) ;; <-- run all the tests and print the report.
|
|||
|
</code></pre>
|
|||
|
|
|||
|
<p>and you could invoke it like so, from a source file or from a Makefile:</p>
|
|||
|
|
|||
|
<pre><code class="language-lisp">rlwrap sbcl --non-interactive --load mysystem.asd --eval '(ql:quickload :mysystem)' --load run-tests.lisp
|
|||
|
;; we assume Quicklisp is installed and loaded. This can be done in the Lisp startup file like .sbclrc.
|
|||
|
</code></pre>
|
|||
|
|
|||
|
<p>Before going that route however, have a look at the <code>CI-Utils</code> tool
|
|||
|
that we use in the Continuous Integration section below. It provides a
|
|||
|
<code>run-fiveam</code> command that can do all that for you.</p>
|
|||
|
|
|||
|
<p>But let us highlight something you’ll have to take care of if you ran
|
|||
|
your tests like this: the <strong>exit code</strong>. Indeed, <code>(run!)</code> prints a
|
|||
|
report, but it doesn’t say to your Lisp wether the tests were
|
|||
|
successful or not, and wether to exit with an exit code of 0 (for
|
|||
|
success) or more (for errors). So, if your testst were run on a CI
|
|||
|
system, the CI status would be always green, even if tests failed. To
|
|||
|
remedy that, replace <code>run!</code> by:</p>
|
|||
|
|
|||
|
<pre><code class="language-lisp">(let ((result (run!)))
|
|||
|
(cond
|
|||
|
((null result)
|
|||
|
(log:info "Tests failed!") ;; FiveAM printed the report already.
|
|||
|
(uiop:quit 1))
|
|||
|
(t
|
|||
|
(log:info "All pass.")
|
|||
|
(uiop:quit))))
|
|||
|
</code></pre>
|
|||
|
|
|||
|
<p>Check with <code>echo $?</code> on your shell that the exit code is correct.</p>
|
|||
|
|
|||
|
<h3 id="testing-report-customization">Testing report customization</h3>
|
|||
|
|
|||
|
<p>It is possible to generate our own testing report. The macro <code>run!</code> is nothing more than a composition of <code>explain!</code> and <code>run</code>.</p>
|
|||
|
|
|||
|
<p>Instead of generating a testing report like its cousin <code>run!</code>, the function <code>run</code> runs suite or test passed in and returns a list of <code>test-result</code> instance, usually instances of <code>test-failure</code> or <code>test-passed</code> sub-classes.</p>
|
|||
|
|
|||
|
<p>A class <code>text-explainer</code> is defined as a basic class for testing report generator. A generic function <code>explain</code> is defined to take a <code>text-plainer</code> instance and a <code>test-result</code> instance (returned by <code>run</code>) and generate testing report. The following 2 code snippets are equivalent:</p>
|
|||
|
|
|||
|
<pre><code class="language-lisp">(run! 'read-file-as-string-non-existing-file)
|
|||
|
|
|||
|
(explain (make-instance '5am::detailed-text-explainer)
|
|||
|
(run 'read-file-as-string-non-existing-file))
|
|||
|
</code></pre>
|
|||
|
|
|||
|
<p>By creating a new sub-class of <code>text-explainer</code> and a method <code>explain</code> for it, it is possible to define a new test reporting system.</p>
|
|||
|
|
|||
|
<p>The following code just provides a proof-of-concept implementation. You may need to read the source code of <code>5am::detailed-text-explainer</code> to fully understand it.</p>
|
|||
|
|
|||
|
<pre><code class="language-lisp">(defclass my-explainer (5am::text-explainer)
|
|||
|
())
|
|||
|
|
|||
|
(defmethod 5am:explain ((explainer my-explainer) results &optional (stream *standard-output*) recursive-deps)
|
|||
|
(loop for result in results
|
|||
|
do (case (type-of result)
|
|||
|
('5am::test-passed
|
|||
|
(format stream "~%Test ~a passed" (5am::name (5am::test-case result))))
|
|||
|
('5am::test-failure
|
|||
|
(format stream "~%Test ~a failed" (5am::name (5am::test-case result)))))))
|
|||
|
|
|||
|
(explain (make-instace 'my-explainer)
|
|||
|
(run 'read-file-as-string-non-existing-file))
|
|||
|
; Test READ-FILE-AS-STRING-NON-EXISTING-FILE failed
|
|||
|
; Test READ-FILE-AS-STRING-NON-EXISTING-FILE passed => NIL
|
|||
|
</code></pre>
|
|||
|
|
|||
|
<h2 id="interactively-fixing-unit-tests">Interactively fixing unit tests</h2>
|
|||
|
|
|||
|
<p>Common Lisp is interactive by nature (or so are most implementations),
|
|||
|
and testing frameworks make use of it. It is possible to ask the
|
|||
|
framework to open the debugger on a failing test, so that we can
|
|||
|
inspect the stack trace and go to the erroneous line instantly, fix it
|
|||
|
and re-run the test from where it left off, by choosing the suggested
|
|||
|
<em>restart</em>.</p>
|
|||
|
|
|||
|
<p>With FiveAM, set <code>fiveam:*on-failure*</code> to <code>:debug</code>:</p>
|
|||
|
|
|||
|
<pre><code class="language-lisp">(setf fiveam:*on-failure* :debug)
|
|||
|
</code></pre>
|
|||
|
|
|||
|
<p>You will be dropped into the interactive debugger if an error occurs.</p>
|
|||
|
|
|||
|
<p>Use <code>:backtrace</code> to print a backtrace, continue to run the following tests and print FiveAM’s report.</p>
|
|||
|
|
|||
|
<p>The default is <code>nil</code>: carry on the tests execution and print the report.</p>
|
|||
|
|
|||
|
<!-- epub-exclude-start -->
|
|||
|
|
|||
|
<p>Below is a short screencast showing all this in action:</p>
|
|||
|
|
|||
|
<iframe width="560" height="315" src="https://www.youtube.com/embed/KsHxgP3SRTs" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen=""></iframe>
|
|||
|
|
|||
|
<!-- epub-exclude-end -->
|
|||
|
|
|||
|
<p>Note that in the debugger:</p>
|
|||
|
|
|||
|
<ul>
|
|||
|
<li><code><enter></code> on a backtrace shows more of it</li>
|
|||
|
<li><code>v</code> on a backtrace goes to the corresponding line or function.</li>
|
|||
|
<li>you can discover more options with the menu.</li>
|
|||
|
</ul>
|
|||
|
|
|||
|
<h2 id="code-coverage">Code coverage</h2>
|
|||
|
|
|||
|
<p>A code coverage tool produces a visual output that allows to see what
|
|||
|
parts of our code were tested or not:</p>
|
|||
|
|
|||
|
<p><img src="assets/coverage.png" alt="" title="source: https://www.snellman.net/blog/archive/2007-05-03-code-coverage-tool-for-sbcl.html" /></p>
|
|||
|
|
|||
|
<p>Such capabilities are included into Lisp implementations. For example, SBCL has the
|
|||
|
<a href="http://www.sbcl.org/manual/index.html#sb_002dcover">sb-cover</a> module
|
|||
|
and the feature is also built-in in <a href="https://ccl.clozure.com/docs/ccl.html#code-coverage">CCL</a>
|
|||
|
or <a href="http://www.lispworks.com/documentation/lw71/LW/html/lw-68.htm">LispWorks</a>.</p>
|
|||
|
|
|||
|
<h3 id="generating-an-html-test-coverage-output">Generating an html test coverage output</h3>
|
|||
|
|
|||
|
<p>Let’s do it with SBCL’s <a href="http://www.sbcl.org/manual/index.html#sb_002dcover">sb-cover</a>.</p>
|
|||
|
|
|||
|
<p>Coverage reports are only generated for code compiled using
|
|||
|
<code>compile-file</code> with the value of the <code>sb-cover:store-coverage-data</code>
|
|||
|
optimization quality set to 3.</p>
|
|||
|
|
|||
|
<pre><code class="language-lisp">;;; Load SB-COVER
|
|||
|
(require :sb-cover)
|
|||
|
|
|||
|
;;; Turn on generation of code coverage instrumentation in the compiler
|
|||
|
(declaim (optimize sb-cover:store-coverage-data))
|
|||
|
|
|||
|
;;; Load some code, ensuring that it's recompiled with the new optimization
|
|||
|
;;; policy.
|
|||
|
(asdf:oos 'asdf:load-op :cl-ppcre-test :force t)
|
|||
|
|
|||
|
;;; Run the test suite.
|
|||
|
(fiveam:run! yoursystem-test)
|
|||
|
</code></pre>
|
|||
|
|
|||
|
<p>Produce a coverage report, set the output directory:</p>
|
|||
|
|
|||
|
<pre><code class="language-lisp">(sb-cover:report "coverage/")
|
|||
|
</code></pre>
|
|||
|
|
|||
|
<p>Finally, turn off instrumentation:</p>
|
|||
|
|
|||
|
<pre><code class="language-lisp">(declaim (optimize (sb-cover:store-coverage-data 0)))
|
|||
|
</code></pre>
|
|||
|
|
|||
|
<p>You can open your browser at
|
|||
|
<code>../yourproject/t/coverage/cover-index.html</code> to see the report like
|
|||
|
the capture above or like
|
|||
|
<a href="https://www.snellman.net/sbcl/cover/cl-ppcre-report-3/cover-index.html">this code coverage of cl-ppcre</a>.</p>
|
|||
|
|
|||
|
<h2 id="continuous-integration">Continuous Integration</h2>
|
|||
|
|
|||
|
<p>Continuous Integration is important to run automatic tests after a
|
|||
|
commit or before a pull request, to run code quality checks, to build
|
|||
|
and distribute your software… well, to automate everything about software.</p>
|
|||
|
|
|||
|
<p>We want our programs to be portable across Lisp implementations, so
|
|||
|
we’ll set up our CI pipeline to run our tests against several of them (it
|
|||
|
could be SBCL and CCL of course, but while we’re at it ABCL, ECL and
|
|||
|
possibly more).</p>
|
|||
|
|
|||
|
<p>We have a choice of Continuous Integration services: Travis CI, Circle, Gitlab CI, now also GitHub Actions, etc (many existed before GitHub Actions, if you wonder). We’ll have a look at how to configure a CI pipeline for Common Lisp, and we’ll focus a little more on Gitlab CI on the last part.</p>
|
|||
|
|
|||
|
<p>We’ll also quickly show how to publish coverage reports to the <a href="https://coveralls.io/">Coveralls</a> service. <a href="https://github.com/fukamachi/cl-coveralls">cl-coveralls</a> helps to post our coverage to the service.</p>
|
|||
|
|
|||
|
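<p>For the record, cl-coveralls is used by wrapping the test run in its macro, along the lines of this sketch (check the cl-coveralls README for the exact options and supported CI environments):</p>

<pre><code class="language-lisp">(ql:quickload :cl-coveralls)

;; Measure coverage while running the FiveAM suite and post it to Coveralls.
;; This is meant to run on a CI service, with the service's environment variables set.
(coveralls:with-coveralls ()
  (fiveam:run! 'my-system))
</code></pre>
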
<h3 id="github-actions-circle-ci-travis-with-ci-utils">GitHub Actions, Circle CI, Travis… with CI-Utils</h3>
|
|||
|
|
|||
|
<p>We’ll use <a href="https://neil-lindquist.github.io/CI-Utils/">CI-Utils</a>, a set of utilities that comes with many examples. It also explains more precisely what is a CI system and compares a dozen of services.</p>
|
|||
|
|
|||
|
<p>It relies on <a href="https://github.com/roswell/roswell/">Roswell</a> to install the Lisp implementations and to run the tests. They all are installed with a bash one-liner:</p>
|
|||
|
|
|||
|
<pre><code>curl -L https://raw.githubusercontent.com/roswell/roswell/release/scripts/install-for-ci.sh | bash
|
|||
|
</code></pre>
|
|||
|
|
|||
|
<p>(note that on the Gitlab CI example, we use a ready-to-use Docker image that contains them all)</p>
|
|||
|
|
|||
|
<p>It also ships with a test runner for FiveAM, which eases some rough parts (like returning the right error code to the terminal). We install ci-utils with Roswell, and we get the <code>run-fiveam</code> executable.</p>
|
|||
|
|
|||
|
<p>Then we can run our tests:</p>
|
|||
|
|
|||
|
<pre><code>run-fiveam -e t -l foo/test :foo-tests # foo is our project
|
|||
|
</code></pre>
|
|||
|
|
|||
|
<p>Following is the complete <code>.travis.yml</code> file.</p>
|
|||
|
|
|||
|
<p>The first part should be self-explanatory:</p>
|
|||
|
|
|||
|
<pre><code class="language-yml">### Example configuration for Travis CI ###
|
|||
|
language: generic
|
|||
|
|
|||
|
addons:
|
|||
|
homebrew:
|
|||
|
update: true
|
|||
|
packages:
|
|||
|
- roswell
|
|||
|
apt:
|
|||
|
packages:
|
|||
|
- libc6-i386 # needed for a couple implementations
|
|||
|
- default-jre # needed for abcl
|
|||
|
|
|||
|
# Runs each lisp implementation on each of the listed OS
|
|||
|
os:
|
|||
|
- linux
|
|||
|
# - osx # OSX has a long setup on travis, so it's likely easier to just run select implementations on OSX
|
|||
|
</code></pre>
|
|||
|
|
|||
|
<p>This is how we configure the implementations matrix, to run our tests on several Lisp implementations. We also send the test coverage made with SBCL to Coveralls.</p>
|
|||
|
|
|||
|
<pre><code>env:
|
|||
|
global:
|
|||
|
- PATH=~/.roswell/bin:$PATH
|
|||
|
- ROSWELL_INSTALL_DIR=$HOME/.roswell
|
|||
|
# - COVERAGE_EXCLUDE=t # for rove
|
|||
|
jobs:
|
|||
|
# The implementation and whether coverage is send to coveralls are controlled with these environmental variables
|
|||
|
- LISP=sbcl-bin COVERALLS=true
|
|||
|
- LISP=ccl-bin
|
|||
|
- LISP=abcl
|
|||
|
- LISP=ecl # warn: in our experience, compilations times can be long on ECL.
|
|||
|
|
|||
|
# Additional OS/Lisp combinations can be added to those generated above
|
|||
|
jobs:
|
|||
|
include:
|
|||
|
- os: osx
|
|||
|
env: LISP=sbcl-bin
|
|||
|
- os: osx
|
|||
|
env: LISP=ccl-bin
|
|||
|
</code></pre>
|
|||
|
|
|||
|
<p>Some jobs can be marked as allowed to fail:</p>

<pre><code># Note that this should only be used if there is no interest for the library to work on that system
#  allow_failures:
#    - env: LISP=abcl
#    - env: LISP=ecl
#    - env: LISP=cmucl
#    - env: LISP=alisp
#      os: osx

  fast_finish: true
</code></pre>

<p>We finally install Roswell, the implementations, and we run our tests.</p>

<pre><code>cache:
  directories:
    - $HOME/.roswell
    - $HOME/.config/common-lisp

install:
  - curl -L https://raw.githubusercontent.com/roswell/roswell/release/scripts/install-for-ci.sh | sh
  - ros install ci-utils #for run-fiveam
  # - ros install rove #for [run-] rove

  # If asdf 3.16 or higher is needed, uncomment the following lines
  #- mkdir -p ~/common-lisp
  #- if [ "$LISP" == "ccl-bin" ]; then git clone https://gitlab.common-lisp.net/asdf/asdf.git ~/common-lisp; fi

script:
  - run-fiveam -e t -l foo/test :foo-tests
  #- rove foo.asd
</code></pre>

<p>Below with Gitlab CI, we’ll use a Docker image that already contains the Lisp binaries and every Debian package required to build Quicklisp libraries.</p>

<h3 id="gitlab-ci">Gitlab CI</h3>

<p><a href="https://docs.gitlab.com/ce/ci/README.html">Gitlab CI</a> is part of
Gitlab and is available on <a href="https://gitlab.com/">Gitlab.com</a>, for
public and private repositories. Let’s see straight away a simple
<code>.gitlab-ci.yml</code>:</p>

<pre><code>variables:
  QUICKLISP_ADD_TO_INIT_FILE: "true"

image: clfoundation/sbcl:latest

before_script:
  - install-quicklisp
  - git clone https://github.com/foo/bar ~/quicklisp/local-projects/

test:
  script:
    - make test
</code></pre>

<p>Gitlab CI is based on Docker. With <code>image</code> we tell it to use the <code>latest</code> tag
of the <a href="https://hub.docker.com/r/clfoundation/sbcl/">clfoundation/sbcl</a>
image. This includes the latest version of SBCL, many OS packages useful for CI
purposes, and a script to install Quicklisp. Gitlab will load the image, clone
our project and put us at the project root with administrative rights to run
the rest of the commands.</p>

<p><code>test</code> is a “job” we define, and <code>script</code> is a
recognized keyword that takes a list of commands to run.</p>

<p>Suppose we must install dependencies before running our tests: <code>before_script</code>
will run before each job. Here we install Quicklisp (adding it to SBCL’s init
file), and clone a library where Quicklisp can find it.</p>

<p>We can try it locally ourselves. If we already installed <a href="https://docs.docker.com/">Docker</a> and
started its daemon (<code>sudo service docker start</code>), we can do:</p>

<pre><code>docker run --rm -it -v /path/to/local/code:/usr/local/share/common-lisp/source clfoundation/sbcl:latest bash
</code></pre>

<p>This will download the Lisp image (±300MB compressed), mount some local code in
the image where indicated, and drop us into bash. Now we can try a <code>make test</code>.</p>

<p>Here is a more complete example that tests against several CL implementations
in parallel:</p>

<pre><code>variables:
  IMAGE_TAG: latest
  QUICKLISP_ADD_TO_INIT_FILE: "true"
  QUICKLISP_DIST_VERSION: latest

image: clfoundation/$LISP:$IMAGE_TAG

stages:
  - test
  - build

before_script:
  - install-quicklisp
  - git clone https://github.com/foo/bar ~/quicklisp/local-projects/

.test:
  stage: test
  script:
    - make test

abcl test:
  extends: .test
  variables:
    LISP: abcl

ccl test:
  extends: .test
  variables:
    LISP: ccl

ecl test:
  extends: .test
  variables:
    LISP: ecl

sbcl test:
  extends: .test
  variables:
    LISP: sbcl

build:
  stage: build
  variables:
    LISP: sbcl
  only:
    - tags
  script:
    - make build
  artifacts:
    paths:
      - some-file-name
</code></pre>

<p>Here we defined two <code>stages</code> (see
<a href="https://docs.gitlab.com/ee/ci/environments/">environments</a>),
“test” and “build”, defined to run one after another. A “build” stage
will start only if the “test” one succeeds.</p>

<p>“build” is asked to run <code>only</code> when a
new tag is pushed, not at every commit. When it succeeds, it will make
the files listed in <code>artifacts</code>’s <code>paths</code> available for download. We can
download them from Gitlab’s Pipelines UI, or with a URL. This one will download
the file “some-file-name” from the latest “build” job:</p>

<pre><code>https://gitlab.com/username/project-name/-/jobs/artifacts/master/raw/some-file-name?job=build
</code></pre>

<p>When the pipelines pass, you will see:</p>

<p><img src="assets/img-ci-build.png" alt="" /></p>

<p>You now have a ready-to-use Gitlab CI.</p>

<h2 id="emacs-integration-running-tests-using-slite">Emacs integration: running tests using Slite</h2>
|
|||
|
|
|||
|
<p><a href="https://github.com/tdrhq/slite">Slite</a> stands for SLIme TEst runner. It allows you to see the summary of test failures, jump to test definitions, rerun tests with the debugger… all from inside Emacs. We get a dashboard-like buffer with green and red badges, from where we can act on tests. It makes the testing process <em>even more</em> integrated and interactive.</p>
|
|||
|
|
|||
|
<p>It consists of an ASDF system and an Emacs package. It is a new project (it appeared mid 2021) so, as of September 2021, neither can be installed via Quicklisp or MELPA yet. Please refer to its <a href="https://github.com/tdrhq/slite">repository</a> for instructions.</p>
|
|||
|
|
|||
|
<h2 id="references">References</h2>
|
|||
|
|
|||
|
<ul>
|
|||
|
<li><a href="http://turtleware.eu/posts/Tutorial-Working-with-FiveAM.html">Tutorial: Working with FiveAM</a>, by Tomek “uint” Kurcz</li>
|
|||
|
<li><a href="https://sabracrolleton.github.io/testing-framework">Comparison of Common Lisp Testing Frameworks</a>, by Sabra Crolleton.</li>
|
|||
|
<li>the <a href="https://hub.docker.com/u/clfoundation">CL Foundation Docker images</a></li>
|
|||
|
</ul>
|
|||
|
|
|||
|
<h2 id="see-also">See also</h2>
|
|||
|
|
|||
|
<ul>
|
|||
|
<li><a href="https://github.com/vindarel/cl-cookieproject">cl-cookieproject</a>, a project skeleton with a FiveAM tests structure.</li>
|
|||
|
</ul>
|
|||
|
|
|||
|
|
|||
|
<p class="page-source">
|
|||
|
Page source: <a href="https://github.com/LispCookbook/cl-cookbook/blob/master/testing.md">testing.md</a>
|
|||
|
</p>
|
|||
|
</div>
|
|||
|
|
|||
|
<script type="text/javascript">
|
|||
|
|
|||
|
// Don't write the TOC on the index.
|
|||
|
if (window.location.pathname != "/cl-cookbook/") {
|
|||
|
$("#toc").toc({
|
|||
|
content: "#content", // will ignore the first h1 with the site+page title.
|
|||
|
headings: "h1,h2,h3,h4"});
|
|||
|
}
|
|||
|
|
|||
|
$("#two-cols + ul").css({
|
|||
|
"column-count": "2",
|
|||
|
});
|
|||
|
$("#contributors + ul").css({
|
|||
|
"column-count": "4",
|
|||
|
});
|
|||
|
</script>
|
|||
|
|
|||
|
|
|||
|
|
|||
|
<div>
|
|||
|
<footer class="footer">
|
|||
|
<hr/>
|
|||
|
© 2002–2021 the Common Lisp Cookbook Project
|
|||
|
</footer>
|
|||
|
|
|||
|
</div>
|
|||
|
<div id="toc-btn">T<br>O<br>C</div>
|
|||
|
</div>
|
|||
|
|
|||
|
<script text="javascript">
|
|||
|
HighlightLisp.highlight_auto({className: null});
|
|||
|
</script>
|
|||
|
|
|||
|
<script type="text/javascript">
|
|||
|
function duckSearch() {
|
|||
|
var searchField = document.getElementById("searchField");
|
|||
|
if (searchField && searchField.value) {
|
|||
|
var query = escape("site:lispcookbook.github.io/cl-cookbook/ " + searchField.value);
|
|||
|
window.location.href = "https://duckduckgo.com/?kj=b2&kf=-1&ko=1&q=" + query;
|
|||
|
// https://duckduckgo.com/params
|
|||
|
// kj=b2: blue header in results page
|
|||
|
// kf=-1: no favicons
|
|||
|
}
|
|||
|
}
|
|||
|
</script>
|
|||
|
|
|||
|
<script async defer data-domain="lispcookbook.github.io/cl-cookbook" src="https://plausible.io/js/plausible.js"></script>
|
|||
|
|
|||
|
</body>
|
|||
|
</html>
|