from the needs-more-XML dept.
OpenBSD developer Gilles Chehade debunks multiple myths regarding the deployment of e-mail services. While it takes some work to deploy and operate a mail service, it is not as hard as the large corporations would like people to believe. Gilles draws on his experience building and working with both proprietary and free and open-source mail systems, and explains why it is feasible to consider running one.
I work on an open-source SMTP server. I build both open-source and proprietary solutions related to mail. I will likely open a commercial mail service next year.
In this article, I will deliberately use the term mail because it is vague enough to encompass both protocols and software. This is not a very technical article and I don't want to dive into protocols; I want people who have never worked with mail to understand all of it.
I will also not explain how I achieve the tasks I describe as easy. I want this article to be about the "mail is hard" myth, regardless of which technical solution you use to implement it. I want people who read this to go read about Postfix, notqmail, Exim and OpenSMTPD, and not go straight to OpenSMTPD just because I provided examples.
I will write a follow-up article, this time focusing on how I do things with OpenSMTPD. If people write similar articles for other solutions, please forward them to me and I'll link to some of them. That list will be updated as time passes to reflect changes in the ecosystem, so come back and check again over time.
Finally, the name Big Mailer Corps represents the major e-mail providers. I'm not targeting a specific one; you can basically replace Big Mailer Corps anywhere in this text with the name of any provider that holds several hundred million recipient addresses. Keep in mind that some Big Mailer Corps allow hosting under your own domain name, so when I mention their e-mail address space, this includes hosted domains: if you own a domain but it is hosted by a Big Mailer Corp, your domain and all e-mail addresses under it are part of their address space.
Earlier on SN:
Protocols, Not Platforms: A Technological Approach to Free Speech (2019)
Re-decentralizing the World-Wide Web (2019)
Usenet, Authentication, and Engineering - We Can Learn from the Past (2018)
A Decentralized Web Would Give Power Back to the People Online (2016)
Decentralized Sharing (2014)
Anonymous Coward writes:
""MediaGoblin is a free software media publishing platform that anyone can install and run. Decentralization, (...) is the main goal of the project, one that is backed and connected to the GNU project.
So far, MediaGoblin has raised only $3,000 of its $60,000 goal, with the campaign set to end April 14th, (...) that is a date that is soon approaching. The first crowd-sourcing initiative was in October of 2012, so this is not the first crowd-funding initiative the project has launched. This second campaign was clearly spurred on by the PRISM revelations of recent past. Having not noticed any failures to meet 2012's funding campaign, it's very possible the team may reach their goal again, given the intensity of the subject matter."
The original purpose of the web and internet, if you recall, was to build a common neutral network in which everyone can participate equally for the betterment of humanity. Fortunately, there is an emerging movement to bring the web back to this vision, and it even involves some of the key figures from the birth of the web. It's called the Decentralised Web or Web 3.0, and it describes a trend to build services on the internet which do not depend on any single "central" organisation to function.
So what happened to the initial dream of the web? Much of the altruism faded during the first dot-com bubble, as people realised that an easy way to create value on top of this neutral fabric was to build centralised services which gather, trap and monetise information.
[...] There are three fundamental areas that the Decentralised Web necessarily champions: privacy, data portability and security.
Privacy: Decentralisation forces an increased focus on data privacy. Data is distributed across the network and end-to-end encryption technologies are critical for ensuring that only authorized users can read and write. Access to the data itself is entirely controlled algorithmically by the network as opposed to more centralized networks where typically the owner of that network has full access to data, facilitating customer profiling and ad targeting.
Data Portability: In a decentralized environment, users own their data and choose with whom they share this data. Moreover they retain control of it when they leave a given service provider (assuming the service even has the concept of service providers). This is important. If I want to move from General Motors to BMW today, why should I not be able to take my driving records with me? The same applies to chat platform history or health records.
Security: Finally, we live in a world of increased security threats. In a centralized environment, the bigger the silo, the bigger the honeypot is to attract bad actors. Decentralized environments are safer by their general nature against being hacked, infiltrated, acquired, bankrupted or otherwise compromised as they have been built to exist under public scrutiny from the outset.
In the Web 3.0 I want a markup tag that delivers a nasty shock to cyber-spies...
Professor Steve Bellovin of the computer science department at Columbia University in New York City writes in his blog about early design decisions for Usenet. In particular, he addresses authentication and the factors taken into consideration given the technology available at the time. After weighing the options then available and finding them infeasible, the developers ultimately threw up their hands.
That left us with no good choices. The infrastructure for a cryptographic solution was lacking. The uux command rendered illusory any attempts at security via the Usenet programs themselves. We chose to do nothing. That is, we did not implement fake security that would give people the illusion of protection but not the reality.
For those unfamiliar with it, Usenet is a text-based, worldwide, decentralized, distributed discussion system, somewhat like a bulletin board system. Servers operate peer to peer, while users connect to their preferred server using a regular client-server model. It was a key source of work-related discussion, as well as entertainment and general news. Being resistant to censorship, it was an important channel for news during several major political crises around the world in the 1980s and early 1990s, and that same quality has earned it the ire of both large businesses and powerful politicians. It was still an integral part of any ISP's offerings even 15 years ago. The lack of authentication has been both a strength and a weakness, and Professor Bellovin sheds some light on how it came to be that way.
Despite its weaknesses, Usenet gave rise to, among many other things, the now-defunct ClariNet news, which is regarded as the first exclusively online business.
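Purely as an illustration of the client-server access model described above, and not something taken from Bellovin's post, the sketch below shows roughly what a Usenet reader's conversation with a server looks like over NNTP. The host name news.example.org is a placeholder; any NNTP server follows the same plain-text request/response pattern.

    import socket

    HOST = "news.example.org"   # placeholder server name, for illustration only

    with socket.create_connection((HOST, 119), timeout=10) as sock:
        reader = sock.makefile("r", encoding="utf-8", newline="\r\n")
        print(reader.readline().strip())        # greeting, e.g. "200 server ready"
        sock.sendall(b"HELP\r\n")               # ask the server what it supports
        print(reader.readline().strip())        # "100 help text follows"
        for line in reader:
            if line.strip() == ".":             # multi-line replies end with a lone "."
                break
            print(line.strip())
        sock.sendall(b"QUIT\r\n")
        print(reader.readline().strip())        # "205 goodbye"

Everything the client sends and receives is readable text, which is typical of the protocols discussed in this roundup.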
Researcher Ruben Verborgh explains how to re-decentralize the World Wide Web, for good this time. He argues that decentralization is foremost about choice, and that people should thus be free to join large or small communities; he puts forward Solid as a primary option.
Originally designed as a decentralized network, the Web has undergone a significant centralization in recent years. In order to regain freedom and control over the digital aspects of our lives, we should understand how we arrived at this point and how we can get back on track. This chapter explains the history of decentralization in a Web context, and details Tim Berners-Lee’s role in the continued battle for a free and open Web. The challenges and solutions are not purely technical in nature, but rather fit into a larger socio-economic puzzle, to which all of us are invited to contribute. Let us take back the Web for good, and leverage its full potential as envisioned by its creator.
Earlier on SN:
Tim Berners-Lee Launches Inrupt, Aims to Create a Decentralized Web (2018)
Decentralized Sharing (2014)
Mike Masnick, best known as the editor of Techdirt, has written an essay on a technological approach to preserving free speech online, in spite of the direction things have been heading with regard to locked-in platforms. He proposes moving back to an Internet where protocols dominate.
This article proposes an entirely different approach—one that might seem counterintuitive but might actually provide for a workable plan that enables more free speech, while minimizing the impact of trolling, hateful speech, and large-scale disinformation efforts. As a bonus, it also might help the users of these platforms regain control of their privacy. And to top it all off, it could even provide an entirely new revenue stream for these platforms.
That approach: build protocols, not platforms.
To be clear, this is an approach that would bring us back to the way the internet used to be. The early internet involved many different protocols—instructions and standards that anyone could then use to build a compatible interface. Email used SMTP (Simple Mail Transfer Protocol). Chat was done over IRC (Internet Relay Chat). Usenet served as a distributed discussion system using NNTP (Network News Transfer Protocol). The World Wide Web itself was its own protocol: HyperText Transfer Protocol, or HTTP.
In the past few decades, however, rather than building new protocols, the internet has grown up around controlled platforms that are privately owned. These can function in ways that appear similar to the earlier protocols, but they are controlled by a single entity. This has happened for a variety of reasons. Obviously, a single entity controlling a platform can then profit off of it. In addition, having a single entity can often mean that new features, upgrades, bug fixes, and the like can be rolled out much more quickly, in ways that would increase the user base.
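To make that contrast concrete, here is a minimal, illustrative sketch of what speaking one of those open protocols looks like in practice, using SMTP through Python's standard smtplib. The host, port and addresses are placeholders rather than anything from Masnick's essay; any server that speaks SMTP would accept the same exchange.

    import smtplib
    from email.message import EmailMessage

    # Build a simple message; the addresses are hypothetical.
    msg = EmailMessage()
    msg["From"] = "alice@example.org"
    msg["To"] = "bob@example.net"
    msg["Subject"] = "Protocols, not platforms"
    msg.set_content("Any client that speaks SMTP can deliver this message.")

    # Connect to a placeholder server and hand the message over.
    with smtplib.SMTP("localhost", 25) as smtp:
        smtp.set_debuglevel(1)   # print the protocol exchange to stderr
        smtp.send_message(msg)

With the debug level set, the underlying plain-text conversation (EHLO, MAIL FROM, RCPT TO, DATA) is printed, which is exactly the sort of openly documented exchange that any compatible client, server, or interface can implement.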