Hi all,

 

The LIBER Working Group is happy to share with you an update on our latest talks with Elsevier.

 

Comments are welcome!

 

As a next step we would like to turn this into a blog post on the LIBER website.

 

All the best,

Jos

---

 

Conclusions of the talks between Elsevier and FIM4L

 

In early 2022, FIM4L and Elsevier held a series of talks about federated access (March 8, March 31 and May 9, 2022). This update is intended to inform the FIM4L community about the topics we have been discussing and to identify some of the challenges and opportunities ahead.

The concept of 'agile' federated access

 

In these talks FIM4L presented the technical possibilities for releasing a variety of attributes during the login process. The concept is called "agile" federated access and is based on the CAR (Consent-informed Attribute Release) system from Duke University. With CAR in place, a user is able to approve or deny the release of each individual attribute at the login phase.

 

The choice of whether or not to release a persistent identifier, also known as a pseudonymous identifier, has far-reaching consequences. This shaped the main theme of the talks.

 

This concept gives the user the choice to remain anonymous, pseudonymous or personally identified during the federated login process, and hence on the publisher's platform.
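As a rough illustration of the idea, consent-informed attribute release can be thought of as the identity provider filtering the attribute set it sends to the publisher, based on per-attribute decisions the user makes at login. The sketch below is hypothetical (the attribute names and values are illustrative examples, not the actual CAR implementation or Duke's configuration):

```python
# Hypothetical sketch of consent-informed attribute release (illustrative
# only, not the real CAR API). The identity provider holds a set of
# releasable attributes and forwards only those the user approved at login.

# Attributes the IdP could release for this user (example names/values).
AVAILABLE_ATTRIBUTES = {
    "eduPersonScopedAffiliation": "member@library.example.edu",
    "pairwise-id": "a1b2c3d4@library.example.edu",  # pseudonymous identifier
    "mail": "j.doe@library.example.edu",            # personally identifying
}

def release_attributes(consent: dict) -> dict:
    """Return only the attributes the user approved at the consent prompt."""
    return {name: value
            for name, value in AVAILABLE_ATTRIBUTES.items()
            if consent.get(name, False)}

# Anonymous login: only the affiliation needed for licensed access.
anonymous = release_attributes({"eduPersonScopedAffiliation": True})

# Pseudonymous login: affiliation plus the persistent pairwise identifier.
pseudonymous = release_attributes({"eduPersonScopedAffiliation": True,
                                   "pairwise-id": True})
```

Withholding the pseudonymous identifier makes the session anonymous towards the publisher; releasing it enables a persistent, personalised session. This is exactly the choice described above.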

 

Rob Carter from Duke University gave a live demonstration of this concept [1], using the CAR system to block the release of the pseudonymous identifier to Elsevier's ScienceDirect platform. Technically, the results were all fine: everything worked as expected.

Anonymous login and user experience

 

A major topic has been the variety of user journeys: users can arrive via different paths and with different identities, yet expect a unified experience. Several problems arise across the Elsevier platforms when users log in without a pseudonymous identifier, i.e. anonymously.

 

We regarded Elsevier as a complex publisher with many features and possibilities in place. It offers different services, such as ScienceDirect and Scopus, but also Mendeley, a public service for which a personal login is required. Other online content providers typically offer a simpler platform for managing privacy. In this regard, Elsevier offers an excellent opportunity to investigate the interactions between identity and anonymity.

 

Points of consideration for anonymous login:



 

If anonymous login were officially supported by a publisher, it would be important to inform users through very clear communication. This is hard for two reasons: users generally do not understand the differences between login options, and there will always be points in a user's journey on a platform where the user cannot be informed at all.

 

Building trust

 

Given these and many other difficulties of anonymous login, there should be a clear, basic reason why anonymous login is needed at all. The first answer is the library's principle that it must be possible to conduct research anonymously. Most users, however, are satisfied with a good trust relationship between library and publisher; only for the most privacy-critical users will the question of anonymity remain. And that question will remain in any case, for good reason, because true anonymity on the internet is all but impossible.

 

Given that a pseudonymous identifier is far more convenient for the user and technically preferable, is there a way to create a trust relationship, together with options for the user to be explicitly anonymous towards the publisher when the user requires that?

 

Based on the talks, this question seems to point to the most viable path forward.

 

Some considerations for pseudonymous login:

 

There should be a common trust relationship between library, publisher and users. This can be achieved as follows:

 

The library should advocate anonymity to the publisher, and this advocacy should not stop at SAML or other authentication methods. The publisher holds more data and therefore has more responsibility than the library. Even when a user arrives anonymously from the library, things could change within the publisher's system.

 

It would be beneficial to have a code of conduct alongside the technical solution. A general template data processing agreement (DPA) that libraries can use with publishers seems like a good idea. Technical transparency could further support the trust relationship.

 

Besides the library-publisher relationship, a library that wants its users to have an (as far as possible) anonymous research journey should provide more guidelines for those users. Libraries have historically played a major role in educating users on issues of privacy, and this is another such opportunity. Browser-related recommendations could be part of it.

 

A remaining question seems to be the following. A publisher has many mechanisms to track users, for good reasons. Is it possible for an authenticated user to opt out of a personalised session and switch to an anonymous session? If a publisher can provide this, it would align with the libraries' principle that a user should have the ability to conduct research anonymously.

 

We also want to note another consideration in offering a variety of access choices: the opportunity to educate users on the trade-offs between attribute release and access models. In addition, these choices provide a chance to illustrate the virtues of transparency in user interactions with content providers; the user gets to see clearly what is being shared and why.

 

______Endnotes_______

 

[1] This was recorded during the meeting of March 8, 2022. The video can be watched here.