I’ve stumbled across a mystery. Looking for other people’s solutions, I found this one, which I adapted just slightly to match Eric’s formulation of the puzzle:

```
(defn primes-to-n
  [n]
  (filter (fn [num]
            (loop [end (int (Math/sqrt num)), div 2, re (rem num div)]
              (cond
                (< num 2) false
                (= num 2) true
                (= re 0) false
                (> div end) true
                :else (recur end (inc div) (rem num div)))))
          (range (inc n))))
```
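To convince myself I understood the trial-division loop, I found it helpful to pull the predicate out as a standalone function (the name `trial-prime?` is mine, not from the original):

```
;; The predicate inside primes-to-n, pulled out on its own (my naming,
;; not from the original). Trial division: try divisors 2..sqrt(num);
;; any zero remainder means num is composite.
(defn trial-prime? [num]
  (loop [end (int (Math/sqrt num)), div 2, re (rem num div)]
    (cond
      (< num 2) false   ;; 0 and 1 are not prime
      (= num 2) true    ;; 2 is the only case where div reaches num itself
      (= re 0) false    ;; found a divisor => composite
      (> div end) true  ;; no divisor up to sqrt(num) => prime
      :else (recur end (inc div) (rem num div)))))

(map trial-prime? [1 2 3 4 9 17])
;; => (false true true false false true)
```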

Since I can see that **abhilater** uses one of the refinements I saw mentioned in the Wikipedia article, I added that refinement to my implementation (I was also inspired to use `(rem)` for determining divisibility):

```
(defn not-divisible-by [n d]
  (when-not (= 0 (rem n d))
    n))

(defn sieve-refined [n]
  (let [sqrt-n (Math/sqrt n)]
    (loop [found-primes [2]
           candidates   (range 3 (inc n))]
      (let [highest-found-prime (last found-primes)]
        (if (> highest-found-prime sqrt-n)
          (concat found-primes candidates)
          (let [remaining (->> candidates
                               (map #(not-divisible-by % highest-found-prime))
                               (remove nil?))]
            (recur (conj found-primes (first remaining))
                   (rest remaining))))))))
```

I then decided to try benchmarking my solution against **abhilater**’s, using Criterium. (Sorry for the verbosity here, but I think it could help in solving the mystery.) So, comparing the two solutions for primes up to 10,000, starting with mine:

```
eratosthenes=> (with-progress-reporting (quick-bench (sieve-refined 10000) :verbose))
Warming up for JIT optimisations 5000000000 ...
compilation occurred before 1447 iterations
Estimating execution count ...
Sampling ...
Final GC...
Checking GC...
Finding outliers ...
Bootstrapping ...
Checking outlier significance
x86_64 Mac OS X 10.14.1 4 cpu(s)
Java HotSpot(TM) 64-Bit Server VM 25.131-b11
Runtime arguments: -Dfile.encoding=UTF-8 -XX:-OmitStackTraceInFastThrow -XX:+TieredCompilation -XX:TieredStopAtLevel=1 -Dclojure.compile.path=/Users/petery/Projects/pftv/target/classes -Dpftv.version=0.1.0-SNAPSHOT -Dclojure.debug=false
Evaluation count : 4866 in 6 samples of 811 calls.
Execution time sample mean : 138.937937 µs
Execution time mean : 138.937937 µs
Execution time sample std-deviation : 13.163908 µs
Execution time std-deviation : 14.087452 µs
Execution time lower quantile : 124.000417 µs ( 2.5%)
Execution time upper quantile : 159.560233 µs (97.5%)
Overhead used : 8.534024 ns
Found 1 outliers in 6 samples (16.6667 %)
low-severe 1 (16.6667 %)
Variance from outliers : 30.5188 % Variance is moderately inflated by outliers
```

“Not too bad!” was my reaction, based on no knowledge whatsoever of the subject. Then I ran the test for the other solution:

```
eratosthenes=> (with-progress-reporting (quick-bench (primes-to-n 10000) :verbose))
Estimating sampling overhead
Warming up for JIT optimisations 10000000000 ...
compilation occurred before 82447 iterations
compilation occurred before 164838 iterations
compilation occurred before 10710886 iterations
compilation occurred before 20762588 iterations
compilation occurred before 21009761 iterations
compilation occurred before 31638200 iterations
compilation occurred before 42184248 iterations
Estimating execution count ...
Sampling ...
Final GC...
Checking GC...
Finding outliers ...
Bootstrapping ...
Checking outlier significance
Warming up for JIT optimisations 5000000000 ...
compilation occurred before 188813 iterations
Estimating execution count ...
Sampling ...
Final GC...
Checking GC...
Finding outliers ...
Bootstrapping ...
Checking outlier significance
x86_64 Mac OS X 10.14.1 4 cpu(s)
Java HotSpot(TM) 64-Bit Server VM 25.131-b11
Runtime arguments: -Dfile.encoding=UTF-8 -XX:-OmitStackTraceInFastThrow -XX:+TieredCompilation -XX:TieredStopAtLevel=1 -Dclojure.compile.path=/Users/petery/Projects/pftv/target/classes -Dpftv.version=0.1.0-SNAPSHOT -Dclojure.debug=false
Evaluation count : 10074924 in 6 samples of 1679154 calls.
Execution time sample mean : 54.026565 ns
Execution time mean : 54.026528 ns
Execution time sample std-deviation : 3.897644 ns
Execution time std-deviation : 3.935399 ns
Execution time lower quantile : 50.770124 ns ( 2.5%)
Execution time upper quantile : 60.040617 ns (97.5%)
Overhead used : 8.534024 ns
```
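One thing I have not ruled out (this is a sketch of my own, assuming `primes-to-n` exactly as defined above): `filter` returns a lazy sequence, so a benchmark could conceivably be timing only the creation of that seq, not the sieving itself. Checking `realized?` before and after forcing the seq with `doall` would at least show whether laziness is in play:

```
;; primes-to-n copied from above so this snippet is self-contained
(defn primes-to-n
  [n]
  (filter (fn [num]
            (loop [end (int (Math/sqrt num)), div 2, re (rem num div)]
              (cond
                (< num 2) false
                (= num 2) true
                (= re 0) false
                (> div end) true
                :else (recur end (inc div) (rem num div)))))
          (range (inc n))))

;; Calling the function only builds an unrealized lazy seq ...
(def ps (primes-to-n 10000))
(realized? ps)
;; => false

;; ... the sieving work happens when the seq is actually consumed:
(doall ps)
(realized? ps)
;; => true
```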

Wow, so my solution is 2,000 times slower??? Depressing! I wanted to feel this difference in speed, so I ran a regular `(time)`-measured comparison using primes up to 1,000,000 on that speedy solution:

```
eratosthenes=> (time (str "@abhilater's: " (count (primes-to-n 1000000)) " primes"))
"Elapsed time: 15099.709893 msecs"
"@abhilater's: 78498 primes"
```

So the **Criterium** results indicate I would have to wait 8 hours for my solution to produce those 78,498 primes, right? However:

```
eratosthenes=> (time (str "mine: " (count (sieve-refined 1000000)) " primes"))
"Elapsed time: 7845.699169 msecs"
"mine: 78498 primes"
```

Yes, this is just one run each, but the result has been pretty consistent across the several runs I have done: my solution is about twice as fast at delivering the results.

I have also timed a few runs of primes up to 10,000 in that simple manner. Those results are a bit less conclusive, but the two solutions seem roughly equal there, and mine is certainly not 2,000 times worse.

Anyone have an idea about what is going on?