Divided we Stand: Optional

Our recent article “NULL is Not The Billion Dollar Mistake. A Counter-Rant” got us a lot of reads, controversial comments, and a 50/50 upvote / downvote ratio pretty much everywhere a blog post can be posted and voted on. This was expected. Objectively, NULL is just a “special” value that has been implemented in a variety of languages and type systems, and in a variety of ways – including perhaps the set of natural numbers (a.k.a. “zero”, the original null – them Romans sure didn’t like that idea). Or, as Charles Roth has put it adequately in the comments:
Chuckle. Occasionally a mathematics background comes in handy. Now we could argue about whether NULL was “invented” or “discovered”…
Now, Java’s null is a particularly obnoxious implementation of that “special value”, for reasons like the ones below.

Compile-time typing vs. runtime typing

// We can assign null to any reference type
Object s = null;

// Yet, null is of no type at all
if (null instanceof Object)
    throw new Error("Will never happen");

The null literal is even more special

// Nothing can be put in this list, right?
List<?> list = new ArrayList<Void>();

// Yet, null can:
list.add(null);

Methods are present on the null literal

// This compiles, but does it run?
// (No. Calling a method on the null literal, even with a cast,
// throws a NullPointerException at run time)
((Object) null).getClass();

Java 8’s Optional

The introduction of Optional might have changed everything. Many functional programmers love it so much because the type clearly communicates the cardinality of an attribute. In a way:

// Cardinality 1:
Type t1;

// Cardinality 0-1:
Optional<Type> t01;

// Cardinality 0..n:
Iterable<Type> tn;
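For instance, an API might declare its cardinalities through types like this (the repository and User type here are made up, just for illustration):

interface UserRepository {

    // Cardinality 1 (or an exception):
    User getById(long id);

    // Cardinality 0-1:
    Optional<User> findById(long id);

    // Cardinality 0..n:
    List<User> findByName(String name);
}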

A lot of Java 8’s Optional’s interesting history has been dug out by Nicolai Parlog on his excellent blog. Be sure to check it out: http://blog.codefx.org/tag/optional

In the Java 8 expert groups, Optional wasn’t an easy decision:
[…] There has been a lot of discussion about [Optional] here and there over the years. I think they mainly amount to two technical problems, plus at least one style/usage issue:
  1. Some collections allow null elements, which means that you cannot unambiguously use null in its otherwise only reasonable sense of “there’s nothing there”.
  2. If/when some of these APIs are extended to primitives, there is no value to return in the case of nothing there. The alternative to Optional is to return boxed types, which some people would prefer not to do.
  3. Some people like the idea of using Optional to allow more fluent APIs. As in x = s.findFirst().or(valueIfEmpty) vs if ((x = s.findFirst()) == null) x = valueIfEmpty; Some people are happy to create an object for the sake of being able to do this. Although sometimes less happy when they realize that Optionalism then starts propagating through their designs, leading to Set<Optional<T>>’s and so on.
It’s hard to win here. – Doug Lea
Arguably, the main reason for the JDK to have introduced Optional is the lack of availability of Project Valhalla’s specialization in Java 8, which meant that a performant primitive type stream (such as IntStream) needed some new type like OptionalInt to encode absent values, as returned from IntStream.findAny(), for instance. For API consistency, such an OptionalInt from the IntStream type had to be matched by a “similar” Optional from the Stream type.
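For illustration, here is what that consistency looks like, using nothing but JDK 8 API (a minimal sketch):

// The primitive specialisation cannot use null to signal absence...
OptionalInt anyInt = IntStream.of(1, 2, 3).findAny();

// ... so the reference-typed Stream mirrors it with Optional:
Optional<Integer> anyInteger = Stream.of(1, 2, 3).findAny();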

Can Optional be introduced late in a platform?

While Doug’s concerns are certainly valid, there are some other, more significant arguments that make me wary of Optional (in Java). While Scala developers embrace their awesome Option type as they have no alternative and hardly ever see any null reference or NullPointerException – except when working with some Java libraries – this is not true for Java developers. We have our legacy collections API, which (ab-)uses null all over the place. Take java.util.Map, for instance. Map.get()‘s Javadoc reads:
Returns the value to which the specified key is mapped, or null if this map contains no mapping for the key. […] If this map permits null values, then a return value of null does not necessarily indicate that the map contains no mapping for the key; it’s also possible that the map explicitly maps the key to null. The containsKey operation may be used to distinguish these two cases.
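For illustration, a minimal sketch of that ambiguity, using nothing but a plain java.util.HashMap (which permits null values):

Map<String, String> map = new HashMap<>();
map.put("key", null);

// Both calls return null, for two different reasons:
map.get("key");   // the key is present, explicitly mapped to null
map.get("other"); // the key is absent

// Only containsKey() can tell the two cases apart:
map.containsKey("key");   // true
map.containsKey("other"); // false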
This is how much of the pre-Java 8 collection API worked, and we’re still using it actively with Java 8, with new APIs such as the Streams API, which makes extensive use of Optional. A contrived (and obviously wrong) example:

Map<Integer, List<Integer>> map =
Stream.of(1, 1, 2, 3, 5, 8)
      .collect(Collectors.groupingBy(n -> n % 5));

IntStream.range(0, 5)
         .mapToObj(map::get)
         .map(List::size)
         .forEach(System.out::println);

Boom, NullPointerException. Can you spot it? The map contains remainders of a modulo-5 operation as keys, and the associated, collected dividends as values. We then go through all numbers from 0 to 4 (the only possible remainders), extract the list of associated dividends, List::size them… wait. Oh. Map.get may return null – and it does here for the key 4, to which no number in the stream maps. You’re getting used to the fluent style of Java 8’s new APIs, you’re getting used to the functional and monadic programming style where streams and optionals behave similarly, and you may be quickly surprised that anything passed to a Stream.map() method can be null. In fact, if APIs were allowed to be retrofitted, then the Map.get method might look like this:

public interface Map<K,V> {
    Optional<V> get(Object key);
}

(It probably still wouldn’t, because most maps allow for null values or even null keys, which is hard to retrofit.) If we had such a retrofit, the compiler would complain that we have to unwrap the Optional before calling List::size. We’d fix it and write:

IntStream.range(0, 5)
         .mapToObj(map::get)
         .map(l -> l.orElse(Collections.emptyList()))
         .map(List::size)
         .forEach(System.out::println);
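Even without the retrofit, we can guard against the null today, e.g. with Map.getOrDefault(), which has been available on Map since Java 8 (one possible sketch among several; Optional.ofNullable() would work just as well):

IntStream.range(0, 5)
         .mapToObj(i -> map.getOrDefault(i, Collections.emptyList()))
         .map(List::size)
         .forEach(System.out::println);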

Java’s Crux – Backwards compatibility

Backwards compatibility will lead to a mediocre adoption of Optional. Some parts of the JDK API make use of it, others use null to encode the absent value. You can never be sure and you always have to remember both possibilities, because you cannot trust a non-Optional type to be truly “@NotNull“. If you prefer using Optional over null in your business logic, that’s fine. But you will have to make very sure to apply this strategy thoroughly. Take the following blog post, for instance, which has gotten lots of upvotes on reddit: “Day 4 — Let’s write Null free Java code”. It inadvertently introduces a new anti-pattern:

public class User {
 
    private final String username;
    private Optional<String> fullname;
 
    public User(String username) {
        this.username = username;
        this.fullname = Optional.empty();
    }
 
    public String getUsername() {
        return username;
    }
 
    public Optional<String> getFullname() {
        return fullname;
    }

    //      good--------^^^
    // vvvv--------bad
 
    public void setFullname(String fullname) {
        this.fullname = Optional.of(fullname);
    }
}

The domain object establishes an “Optional opt-in” contract without opting out of null entirely. While getFullname() forces API consumers to reason about the possible absence of a full name, setFullname() doesn’t accept an Optional argument, but a nullable one. What was meant as a clever convenience will result only in confusion on the consumer side (see the call-site sketch further down). The anti-pattern is repeated by Stephen Colebourne (who brought us Joda Time and JSR-310) on his blog, calling this a “pragmatic” approach:

public class Address {
    private final String addressLine;  // never null
    private final String city;         // never null
    private final String postcode;     // optional, thus may be null

    // constructor ensures non-null fields really are non-null
    // optional field can just be stored directly, as null means optional
    public Address(String addressLine, String city, String postcode) {
      this.addressLine = Preconditions.checkNotNull(addressLine);
      this.city = Preconditions.checkNotNull(city);
      this.postcode = postcode;
    }

    // normal getters
    public String getAddressLine() { return addressLine; }
    public String getCity() { return city; }

    // special getter for optional field
    public Optional<String> getPostcode() {
      return Optional.ofNullable(postcode);
    }

    // return optional instead of null for business logic methods that may not find a result
    public static Optional<Address> findAddress(String userInput) {
      return ... // find the address, returning Optional.empty() if not found
    }
}

See the full article here: http://blog.joda.org/2015/08/java-se-8-optional-pragmatic-approach.html
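To see the confusion on the consumer side of the earlier User class, consider this call site (a quick sketch; the "john" value is arbitrary):

User user = new User("john");

// Looks like a legitimate way of saying "there is no full name", but
// Optional.of(null) throws a NullPointerException right here, inside
// the setter:
user.setFullname(null);

// And the consumer cannot pass an Optional either:
// user.setFullname(Optional.empty()); // does not compile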

Choose your poison

We cannot change the JDK. The JDK APIs are a mix of nullable and Optional. But we can change our own business logic. Think carefully before introducing Optional, as this new optional type – unlike what its name suggests – is an all-or-nothing type. Remember that by introducing Optional into your code-base, you implicitly assume the following:

// Cardinality 1:
Type t1;

// Cardinality 0-1:
Optional<Type> t01;

// Cardinality 0..n:
Iterable<Type> tn;

From there on, your code-base should no longer use the simple non-Optional Type type for 0-1 cardinalities. Ever.

35 thoughts on “Divided we Stand: Optional”

  1. I must share a link to my favorite rant: https://groups.google.com/d/msg/project-lombok/ROx9rGRb6lI/EF0lk8F0N10J

    From there on, your code-base should no longer use the simple non-Optional Type type for 0-1 cardinalities. Ever.

    No, as Optional<Type> t01 (just like java.lang.Boolean) actually allows three possibilities. It’s not Serializable (and that’s by design), and using it just somewhere is worse than nothing. It all just feels wrong.

    IMHO Optional introduces way more problems than it solves. Unfortunately, Java won’t ever be able to get rid of this mistake. We could have sane null handling instead, like e.g.,

    List allowing no null members
    List allowing null members
    List maybe allowing null members

    with the rather obvious rules (obvious to those used to List) allowing them to interoperate. Try this with Optional.

    1. Good rant :-)

      The List<Optional<String>> argument and the @MayOrMayNotBeNullable argument are a bit far-fetched. I rarely see the point of nesting these cardinality types via generics, and I rarely feel that the actual nullability question of a type in a collection is hard to answer. They’re 99% @NonNull. So, in the almost-non-event of having a List<@MayOrMayNotBeNullable String>, this type is probably hidden in the internals anyway, and the developer can live with the guilt and shame of adding a separate comment in the Javadoc.

      Also, his tale of “default” values is a bit naive. There hasn’t been any bigger mistake in the Oracle database than making '' IS NULL true, i.e. making '' the “default” for VARCHAR.

      But I’m glad that someone else out there doesn’t think that NullPointerExceptions are that bad!

      1. My claim was mainly that Java has had a nice representation of 0-1 cardinality long before Optional came in. It just needed some syntactic sugar and checks to make it less error-prone. Now Java has two such representations, one ugly and error-prone and one superfluous and coming too late.

        There hasn’t been any bigger mistake in the Oracle database than making '' IS NULL true, i.e. making '' the “default” for VARCHAR.

        But database NULL and Java null are two quite different things. I guess I could live with Java having "" equal to null (it’d be strange, but practical), but what Oracle did makes no sense to me (forbidding NULL for all VARCHAR columns could work better?).

        1. My claim was mainly that Java has had a nice representation of 0-1 cardinality long before Optional came in. It just needed some syntactic sugar and checks to make it less error-prone.

          Yes, there are plenty of ways that could be improved. You’re probably referring to things like the Elvis operator?

          But database NULL and Java null are two quite different things. I guess, I could live with Java having “” equal to null (it’d be strange, but practical)

          Yes, they’re different things, but in a way, also similar. Both are used to model a variety of states, such as:

          • The known absence of a value
          • The unknown presence of a value
          • The not-yet-availability of a value
          • etc.

          In other words, while their implementation is different – especially when used in the language, their semantics (as expressed by users) is often the same.

          but what Oracle did makes no sense to me (forbidding NULL for all VARCHAR columns could work better?).

          So, you could live with a Java where null == "abc".substring(3), but not with a SQL implementation where substring('abc', 4) IS NULL?

          It would be funny to read all the rants about methods like substring. First you have to check the bounds to avoid StringIndexOutOfBoundsException, then you have to check either the bounds or the result to avoid NullPointerException :-)

  2. >> // Cardinality 1:
    >> Type t1;

    t1’s cardinality is not 1. It is still 0 or 1. But it is not obvious/explicit. Optional makes this explicitly obvious: hey, this variable might or might not have a value.

    1. The point of this article is precisely the confusion that you underline with your comment. In your version of the story, both t1 and t01 are of cardinality 0-1, which they effectively are on the JVM, of course. But the type system that we’re using to model our idea of cardinality tries to claim something else. Namely the fact that the absence of an Optional wrapper means “non-optional”, or “non-null”, which ultimately is never 100% the case because of our legacy.

      So in other words, yes, sadly, you’re right.

    1. In its current state, if you add ~5-10 more CPUs to your machine, you might just finish compiling your project before the next coffee.

      On a different note, I’ve met Mike Ernst from the JSR-308 EG. He’s really disappointed that this stupid null debate is the most important reason why people are referring to the checker framework, which has been built to validate much more complex type systems :)

      So, no, I don’t agree with your claim that you’re not introducing another object and new API. You’re introducing an entire parallel universe (pun, for you!) type system. That’s a very important decision and you should be very careful before making that choice, moving much of your logic into annotations. Of course, this seems to be the current fad in enterprise Java anyway (https://blog.jooq.org/2015/04/15/how-jpa-2-1-has-become-the-new-ejb-2-0/), so from some people’s perspective, coding in annotations might not be that wrong…

      On a side-note: Do you have any actual experience with JSR-308 and the checker framework? I’m very curious about real-world experience. It would be an extremely interesting enhancement for jOOQ – think formal SQL validation like:

      • Issue warnings on “plain SQL” API access (for security reasons)
      • Issue errors on Oracle-only API access when using MySQL
      1. moving much of your logic into annotations

        I don’t think you’re moving any logic to annotations (not that I see a problem with annotations, but that’s a different story), it’s just an optional, compile-time, erased type system. Once it’s verified, it works.

        Do you have any actual experience with JSR-308 and the checker framework?

        Not really. I played with a few checkers (I think nullability, physical units and maybe one more) a year ago just to see that it works (on toy programs) and haven’t tried it again because usability is still annoying (no simple, automatic META-INF annotation processors, no automatic integration with IDEs)

        1. I don’t think you’re moving any logic to annotations

          Well, you’re staying within the nullability use-case of the checker framework, but again, it wasn’t designed for this trivial case…

          not that I see a problem with annotations, but that’s a different story

          Yes, that story is here: http://www.annotatiomania.com

          Annotations are good for what they’re designed for: custom keywords / modifiers on syntax elements. They’re mediocre for configuration (because they’re unmodifiable) and they’re terrible for declarative programming (e.g. JPA). What’s your take? We have time for other stories :-)

  3. @”Java’s null is a particularly obnoxious implementation”
    All strangeness goes away when you view it this way: null is of every type, but it’s not an instance of any type.

  4. The million dollar error of Java is not null. Every data type has a range of values; a String s is not a String, it is an address, of course wrapped and closed, but in practice you can think of a reference as a smart memory address, and a memory address must have a 0 value.

    The million dollar error of Java is “everything is a reference”; this forces us to think about null in contexts where null makes no sense.

    Fortunately a future Java version will have “value objects” like in C++, something like:

    String s("hello") 
    

    Where s cannot be null.
    Note: I’m not sure whether it is going to be exactly this way. I was talking on Twitter with Doug Lea, and as far as I remember he said something similar is possible; in fact I think it was announced for Java 9.

    In my opinion a better fix than Optional is using @NonNull or @Final; these annotations should be propagated and their rules enforced by the compiler, for instance:

    @NonNull String s = somethingnotnull;
    

    for instance:

    public class Person {
      private @NonNull String name;
    
      public Person(@NonNull String name) {
        this.name = name;
      }
    
      public @NonNull String getName() {
        return name;
      }
    }
    
    @NonNull String name = me.getName();
    name = othernotnullstring;
    

    Are valid

    String name = me.getName(); 
    

    is a compilation error.

    @Final String s = 
    

    is similar but the reference cannot change its value, including null perhaps.

    Optional has some interesting uses in collections, but it can make Java extremely verbose.

    1. Note: I’m not sure whether it is going to be exactly this way, I was talking by Twitter with Doug Lea and as far I remember he said something similar is possible, in fact I think it was announced for Java 9.

      You’re probably referring to value types (and generic specialisation) as planned for Java 10 in Project Valhalla now. There’s a good overview of the status quo of this project here:
      http://blog.codefx.org/java/dev/the-road-to-valhalla

      Annotation-based type system enhancements are an interesting idea, of course. JSR-308 and the checker framework (http://types.cs.washington.edu/checker-framework) are all about that. I personally doubt that this is a good idea for language enhancements, though.

      1. Thanks for Valhalla info.

        I admit I like JSR-308. The main reason I love and code in Java is the static nature of the language; annotations like @Override have improved my code a lot, and now I have no fear when making deep refactorings, even though I also rely on the safety of IDEs.

        New constraining annotations could provide syntactic expressiveness instead of explaining these limits in comments.

        My “dream” is to be able to program the behavior of these annotations using a sort of meta-programming of the compiler; that is, I would like to define the behaviour of my own @NotNull. Nothing prevents built-in annotations from being provided out of the box as well.

        I prefer annotations
        @NotNull Person getPerson()

        instead of

        Optional<Person> getPerson()

        stuff.

        Link about JSR-308: http://www.oracle.com/technetwork/articles/java/ma14-architect-annotations-2177655.html

        1. Egh. Didn’t someone prove that C++ templates were Turing complete, too? Good luck! ;)

          I know JSR-308. I’ve met Mike Ernst in person. We’ve discussed the possibility of using the checker framework to validate whether jOOQ SQL that is supposed to run on MySQL will really run on MySQL. Or to forbid access to (annotated) “plain SQL” API.

  5. It’s perhaps a bit late to chip in, but here’s a link to Frederik Vraalsen’s javaBin talk this year in Oslo: http://www.slideshare.net/fredriv/java-8-dos-and-donts-javabin-oslo-may-2015

    On slide 42 he’s clearly against using Optional as a method input parameter, and I wholeheartedly side with that decision. In my view, that also implies not using it as a bean field type, so as to preserve the bean conventions.

    However, I (and Frederik Vraalsen and the JDK designers, obviously), still think it is useful as an intermediate type. And not just to solve the NPE problem (if it is a problem), but because it makes interacting with streams just so much easier and I just like the fluent API.
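    (For instance, something along these lines – the users source and the User getters here are made up, just for illustration:)

    String name = users.stream()
                       .filter(u -> u.getAge() > 18)
                       .findFirst()                 // Optional<User> as an intermediate value
                       .map(User::getName)
                       .orElse("anonymous");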

    The main drawback of Optional in Java 8 is, in my view, that it does not share an interface with Stream, say “Monad”. That is alluded to in your article, but may bear some elaboration. In fact, I have written my own bridge classes to the JDK (I have called them Opt and MonadicArrayList) to achieve just that, and my, what nice things you can do with such a library. See for example here: http://www.quora.com/In-simple-terms-what-is-a-sequence-monad

    I quite agree with Stephen Colebourne’s point that if all you ever call on Optional is get() (or isPresent() and its relatives), you’ve missed the whole point of the class.

    — Sebastian

    1. It’s never late for the classics :)

      I absolutely agree that it is a useful type when operating with streams (or any functional API for that matter, or any API that abstracts over reference and primitive types).

      You might be interested in checking out javaslang, which has a common Monad supertype to a variety of subtypes.

      1. Thanks for the pointer to javaslang (Frederik Vraalsen mentioned it, too, towards the end of his talk, but until yesterday I didn’t know of it.) It certainly is an impressive virtuoso piece, all done with mirrors, eh, Iterables. With regard to Monads, I think it’s inspired by the same basic approach to higher-order generic types as Daniel Gronau’s port of Haskell to Java (https://github.com/svn2github/highj/). With regard to persistent collections, I have previously used the TotallyLazy Framework (http://totallylazy.com/). In contrast to javaslang, persistent data structures in TotallyLazy implement the read only portion of the matching standard collection interface, so PersistentMap extends java.util.Map, PersistentList extends java.util.List etc. I like that. It would perhaps be interesting to benchmark TotallyLazy against javaslang.

        BTW: Downloads from the javaslang site didn’t work for me, neither with Chrome nor Firefox. I had to go directly to GitHub (https://github.com/javaslang)

        — Sebastian

        1. Hmm, interesting. I wasn’t aware of TotallyLazy. Thanks for pointing that out. I believe that extending the Java collections is extremely limiting. A fresh start can certainly help building a better API. On the other hand, perhaps, interoperability was a useful feature for TotallyLazy.

          I’ve asked Daniel Dietrich (the Javaslang author) to chime in…

        2. It is as simple as this: don’t look back, look forward.

          If we want to create something more valuable, it will be different from what is already there. If it were not different, there would be no need to create it, because it would already be there.

          There is only one thing that stands in the way, one key requirement: the will to change. Change is the key to survival and evolution.

          To our question, in a sentence: the existing Java collection interfaces are dinosaurs. They cannot be adapted to the new development conditions. I will explain that in the following.

          Our synapses, and therefore the way we think, are aligned to what we do every day, over and over. Imperative programming in a language like Java is bound to shared state. Object oriented programming helps us to maintain that state by encapsulating it. To leverage objects in functional programming (FP), we need to see objects as immutable values: once created, they will never change.

          The Java collection interfaces were designed with mutability in mind. That was the hype during the late 90s: having objects with encapsulated data and accessors of different visibility, inheritance and so on.

          In functional programming we have lambdas to describe behavior in a very concise way. This behavior is injected into sub-routines (read: functions) of our application. The boilerplate is internalized in these sub-routines. This is the way we abstract things, like with a DSL.

          Given a function, we expect it to return the same value on each subsequent call, provided the same arguments. This is not called immutability, it is called referential transparency. But immutability is also a key aspect of FP and plays well together with referential transparency, I think.

          Coming back to Java’s collection library, we are not able to model immutable collections based on the existing interfaces. When we talk about immutability, we do not mean Guava-like immutability. ‘Real’ immutability allows us to call write operations like set or remove, with the difference that a new instance is returned.

          Instead of copying collections on write operations, in FP we use sophisticated algorithms to share as much data of the original collection as possible. A collection API only needs one thing to support this: a new instance of the collection needs to be returned on write operations. The Java collection API does not return a value of the same collection type on write operations.
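          (A minimal sketch of that idea – a made-up interface just for illustration, not Javaslang’s actual API:)

          interface Seq<E> {
              // "write" operations leave this instance untouched and return a new
              // collection, sharing as much of the original structure as possible
              Seq<E> append(E element);
              Seq<E> remove(E element);
          }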

          This is the main reason why Java’s collection interfaces cannot be used to design an immutable collection library. There are other reasons already mentioned in this post, for example the semantics of Map.get(). But this goes beyond the original question. Let me just say that you may want to have algebraic data types at hand when dealing with immutable collections. *cough* Option *cough*

          PS: Sebastian, thanks for the hint with the downloads, I will take a look…

          1. Our synapses, and therefore the way we think, are aligned to what we do every day, over and over

            Your synapses are also still wired around the concept of identity, which leads to you having collection types like Set, Map, based on java.lang.Object identity. I think that identity will be viewed completely differently (finally!) once Java 10 / Valhalla is out.

            What’s your take on that?

          2. Your synapses are also still wired around the concept of identity, which leads to you having collection types like Set, Map, based on java.lang.Object identity. I think that identity will be viewed completely differently (finally!) once Java 10 / Valhalla is out.
            What’s your take on that?

            Well, identity is an interesting topic (in general).

            As OO/Java developers we of course think of object identity in terms of Object.equals. Javaslang also defines a method eq (currently located in javaslang.Iterable, it may move before 2.0.0), which is a smoother replacement for equals, comparing the congruence of structures and the equality of contained values. It is like Scala’s == on steroids and also pretty much like the understanding of equality of value objects in Project Valhalla.

            One of the more interesting value objects (if it is one?) is Function: two partial, referentially transparent functions f1, f2 : X -> Y are identical (f1 = f2) if for all x in X either f1(x) and f2(x) are both undefined, or there exists y in Y with f1(x) = y = f2(x). Even given an infinite amount of memory, this problem is not machine-computable in finite time, and it remains impractical even with finite memory.

            I like value objects as described in project valhalla. Javaslang for Java 10+ will support them :)

            – Daniel

            1. Yes, those thoughts are the ones I’ve had (although I’ve never thought about functions this way. Very nice). But what about sets? I personally think that there is absolutely no point in putting a non-value object into a set (or into a map as a key). With that in mind, is a set really worth being a first-class citizen? Or shouldn’t features like ordering, distinctness, etc. be things that can be attached to any traversable / iterable – as long as they contain value objects?

          3. Lukas, I see your synapses are aligned to SQL tables :-) But yes, one attributed collection type would be sufficient. The distinct property could be generalized by introducing a filter-chain abstraction. We would have a Bag, a group of data items. Additional properties, which might be coupled in some way with a bag, could provide a specific view on that data and could also control the way the data is changed (i.e. only put an item if it is not already present).

            I think tables (= relations) and views are a great abstraction. Views in a programming language would need write operations and underlying constraints, i.e. preserving the distinct property, etc.

            1. Lukas, I see your synapses are aligned to SQL tables

              Yes! In SQL, there are only values. There is no identity, although identity can be emulated using constraints, and some implementations provide identity under the covers for implementation-specific details.

              Datomic is a very nice approach at modelling all data as (immutable) relations: http://www.datomic.com. I’m sure you’d like that as well…

          4. Nice, will check that. (Busy now reviewing a chapter of a book about reactive programming in my lunch break. I’m on page 5 of 35 … 8-} )

      2. Btw – I removed the Monad interface Lukas mentioned in Javaslang 2.0.0. In fact, I removed the whole javaslang.algebra package.

        These algebraic types reflected properties of types like collections (List, Set, …) or controls (Try, Option, …). I realized that it is not possible to create a common interface for all these types. Also the semantics might not be unique, like for Monoid (e.g. +, *, …). And third, some people argue that it is better to deal with Applicatives than with Monads. I want to give developers the freedom to ‘lift’ their code into an algebraic layer and operate there in an abstract way rather than mixing the standard API with purely functional operations. In my opinion algebraic expressions deserve their own layer of abstraction. For example Scalaz does this for Scala. The algebraic Javaslang module first has to be investigated and will be created some time after the 2.0.0 release (if it makes sense and provides developers with real value rather than being just a decoration – we will see).

    2. Hi Sebastian, I’m really ashamed of my stupid answer. Found your blog and I think that I can learn much from you. Are you on twitter? Can’t find you there…

      The download links should work now, feedback welcome.

      – Daniel

      1. Hi Daniel, the links now work for me, thanks for fixing them. And no, I’m not on Twitter or Facebook etc. Laziness mixed with distrust, I’m afraid.
        — Sebastian

        1. Hi Sebastian, you are absolutely right. Media, especially social, is a big mass manipulation machine and a time sucker. I started curing myself by throwing away my TV over 7 years ago. The next big thing will be a life without mobile phone and computer :-))
          – Daniel
