
Saturday, 3 May 2025

The Unspoken Cost of Knowledge: How Academic Publishing Profits from Free Labor

When most people think of publishing, they imagine a writer being paid for their work, an editor polishing it, and a reader buying the final product. This transactional loop, however imperfect, makes a certain kind of economic sense.

Academic publishing, however, doesn’t follow this pattern. In fact, it inverts it.

In academic publishing, it is not the publishers who pay those who generate the content; it is the other way round. Then there are the reviewers, experts in their particular fields, who are called upon to assess the content submitted by authors before publication, a process that can take months of back and forth. They, too, work pro bono. Meanwhile, the publishers profit by charging either the readers, through subscriptions, or, more recently, the authors themselves, through article processing charges (APCs). The result is a system in which the people who create and validate knowledge volunteer their time, while the organizations that distribute it turn a handsome profit.

This would be strange enough if it were a quirky side effect of bureaucracy. But it’s not. It’s the system. And it’s wildly profitable.

The Economics of an Asymmetry

Producing a peer-reviewed paper is not a trivial affair. Researchers spend months, sometimes years, gathering and analyzing data, refining arguments, and engaging with an ever-growing thicket of academic literature. When they’re finally ready to publish, they often face hefty charges just to make their work publicly available. Article processing charges commonly range from $1,600 to $4,000, with high-prestige journals charging much more. Nature, for instance, now charges over $12,000 for its Gold Open Access option.


Even when authors manage to publish without paying upfront, usually in subscription-based journals, universities and libraries still foot enormous bills to access that content. This is what makes the economic structure so peculiar: the charges are real, but they bear little relation to the publishers' actual costs. Independent studies estimate that the actual cost of processing and publishing a paper lies somewhere between $200 and $700. The rest? Pure profit.
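To make the asymmetry concrete, here is a rough back-of-the-envelope calculation in Python, using only the APC and per-article cost ranges quoted above. Actual figures vary by journal, so treat this as an illustrative sketch rather than an audited accounting:

```python
# Back-of-the-envelope estimate of the implied per-article surplus,
# using the APC and cost ranges quoted in the text (USD).
apc_low, apc_high = 1_600, 4_000    # typical article processing charges
cost_low, cost_high = 200, 700      # independent estimates of the actual cost per paper

surplus_low = apc_low - cost_high   # cheapest APC, most expensive production
surplus_high = apc_high - cost_low  # priciest APC, cheapest production

print(f"Implied surplus per article: ${surplus_low:,} to ${surplus_high:,}")
print(f"Implied gross margin: {surplus_low / apc_low:.0%} to {surplus_high / apc_high:.0%}")
# -> roughly $900 to $3,800 per article, i.e. gross margins of about 56% to 95%
```

Even at the conservative end of these ranges, the gap between what is charged and what publishing actually costs is striking.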


And profit they do. Elsevier reported a 38% operating margin in 2023, a number that surpasses even tech giants like Apple or Alphabet. Springer Nature and Taylor & Francis reported margins of roughly 28% and 35%, respectively. These are not the returns of a struggling industry. They are the hallmarks of a rentier model built on monopolizing access to knowledge.



To add insult to injury, authors often surrender copyright to their own work. That means they can’t freely share, reuse, or even publicly post their own findings, at least not in the final, published form, without explicit permission from the publisher. The content is no longer theirs. It belongs to the distributor.

Open Access: A Solution That Isn't

But what about Open Access journals? At first glance, Open Access seems like the fix we’ve all been waiting for: make all research freely available to everyone, no subscriptions, no gatekeeping. Knowledge for the many, not the few. And indeed, perhaps that was the intention, to remove paywalls and democratize access to scientific findings. But Open Access has not solved the problem. It has simply moved it.

Instead of charging readers or libraries to access research, publishers now charge authors to publish it. These are the aforementioned Article Processing Charges, often thousands of dollars per article, paid by researchers, their institutions, or more often, indirectly by the public through government-funded grants. So the cost hasn’t disappeared. It’s just been shifted upstream. The reader no longer pays, the writer does. And the writer’s wallet is frequently taxpayer-funded.

Meanwhile, the underlying business model remains intact. Publishers are still extracting massive profits, just from a different node in the chain. Worse, this system often introduces new barriers: now, if you want to publish in a prestigious journal, you don’t just need good ideas. You need funding. Researchers at underfunded institutions, in the Global South, or in fields with limited grant support, are once again priced out, not from reading the science, but from contributing to it.

The paywall hasn’t been demolished. It’s just been relocated.

The Oubliette

The academic publishing system persists because of an unspoken compromise between compliance and necessity. Academics know it is exploitative, but the pressure to publish in prestigious journals is immense, especially for early-career researchers: tenure, funding, and credibility all hinge on where you publish, not just on what you publish.

So even when researchers are aware of the flaws, they cannot afford not to participate. It's not apathy. It's survival. Change, under these conditions, is not just difficult. It can be professionally dangerous.

On Academic Duty

Defenders of the system sometimes argue that writing and peer review are part of the academic job, already paid for by salaries, grants, or public funding. Why not, then, consider peer review a kind of civic duty? The problem is that this narrative overlooks the broader structure. First, academic salaries are modest, often significantly lower than those offered to individuals with comparable skills in industry. Most academics work on precarious contracts, with limited institutional support. Second, publicly funded research should be accessible to the public, not hidden behind paywalls that restrict its reach and impact. Third, tax money intended to support the advancement of science should not be diverted to enrich private publishing companies, but should instead be reinvested in further research and education. Fourth, the time and effort dedicated to peer reviewing and revising articles comes at the expense of teaching, mentoring, and conducting new research, the core activities that universities and the public explicitly value.


Peer review is essential labor. Without it, the entire edifice of scholarly communication collapses. And yet it is invisible in tenure files, promotion cases, and annual evaluations. It is uncredited, unremunerated, and often thankless.


There are broader implications, too. High APCs and subscription costs deepen the divide between wealthy institutions and the rest of the world. Researchers in lower-income countries, or even underfunded departments, simply can’t afford to participate in this system, either as authors or readers. The result is a kind of epistemic inequality: knowledge flows downhill, access is tiered, and entire regions are locked out of the global scientific conversation.

Reform and Solutions

Given these inequities, it is clear that reform is needed. One promising model is exemplified by platforms like the Open Journal of Astrophysics, which uses the arXiv preprint server as its submission system. Here, articles are submitted openly and peer reviewers are assigned to evaluate the work transparently. This system minimizes costs, maximizes accessibility, and keeps ownership with the researchers. This model could be expanded. Universities and consortia could collaborate to host decentralized, nonprofit journals. These would restore control to the scholarly community and reduce dependence on commercial platforms.


Alternatively, the publishing industry could introduce a model where both authors and reviewers are remunerated for their contributions. Reviewers could receive base compensation for their evaluations, with opportunities to earn higher rates for thorough, well-argued, and respectfully conducted reviews. Outstanding reviewers could be recognized through formal credentials and higher pay rates, encouraging a culture of constructive and diligent peer review. Authors could also be provided with modest stipends to offset the hidden costs of article preparation and revisions. Such a system would not only reward academic labor fairly but also improve the overall quality, rigor, and civility of scholarly communication.

Such reforms wouldn’t just create fairer conditions. They’d likely improve the quality, speed, and rigor of scholarly communication as a whole.

Reclaiming the Commons

Researchers themselves also have a role to play in driving change. Before achieving tenure, they can carefully balance the need to publish in high-impact journals with efforts to also submit work to reputable, low-cost open-access venues whenever possible. They can advocate for transparency in peer review and promote discussions about reform within their institutions. After achieving tenure, researchers are in a stronger position to challenge the status quo more openly: by prioritizing low-cost open-access journals, participating in or founding new publishing initiatives, mentoring young researchers about their publishing choices, and pressuring academic societies and funding agencies to support open science principles.


And for those outside academia: this affects you too. As taxpayers, you fund most of this research. You have a right to access it. You have a right to ask why the products of public investment are being locked away for private profit. You can support open-access legislation, press politicians, funding agencies, and universities to rethink publishing priorities, and back projects that are trying to do things differently. Journalists, educators, and policymakers can help raise awareness of the inequities in academic publishing. Ultimately, by demanding that publicly funded research remain publicly accessible, we can all help shift the system toward one that serves science and society rather than private profit.

Knowledge as a Public Good

Academic publishing has become a system where the creators of new knowledge must pay to give it away, while the public, who funded the creation of this new knowledge, must pay again to read it. The costs are high, the profits concentrated, and the labor largely invisible.

If we believe that knowledge is a public good and not a luxury commodity, then we need to work towards a system that reflects that belief. That means rewarding the work that sustains it, reducing barriers to entry, and making the outputs of scholarship freely available to all.

The current structure is outdated. The incentives are misaligned. But the alternatives are no longer hypothetical. They’re real, viable, and already in motion.

All that remains is the will to choose them.

References:
Cost of publication in Nature: https://www.nature.com/nature/for-authors/publishing-options
How publishers profit from article charges: https://direct.mit.edu/qss/article-pdf/4/4/778/2339495/qss_a_00272.pdf
How bad is it and how did we get here? https://www.theguardian.com/science/2017/jun/27/profitable-business-scientific-publishing-bad-for-science
How academic publishers profit from the publish-or-perish culture: https://www.ft.com/content/575f72a8-4eb2-4538-87a8-7652d67d499e
Academic publishers reap huge profits as libraries go broke: https://www.cbc.ca/news/science/academic-publishers-reap-huge-profits-as-libraries-go-broke-1.3111535
The political economy of academic publishing: On the commodification of a public good: https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0253226
Elsevier parent company reports 10% rise in profit, to £3.2bn: https://www.researchprofessionalnews.com/rr-news-world-2025-2-elsevier-parent-company-reports-10-rise-in-profit-to-3-2bn
Taylor & Francis revenues up 4.3% in 'strong trading performance': https://www.thebookseller.com/news/taylor--francis-revenues-up-43-in-strong-trading-performance



Sunday, 27 April 2025

"Το τέλος της Φυσικής;"


The question of whether Physics is "finished" has been raised many times in history. A characteristic example is the statement made in 1894 by Michelson (of Michelson-Morley experiment fame): "The more important fundamental laws and facts of physical science have all been discovered, and these are now so firmly established that the possibility of their ever being supplanted in consequence of new discoveries is exceedingly remote." Shortly afterwards, the theory of relativity and quantum mechanics arrived and turned everything on its head.

Such statements, I think, usually overlook a few points that matter to me:

1) The existence of "unknown unknowns". We do not know what we do not know. The greatest upheavals have come from phenomena we had not even imagined.

2) Imagination as a fundamental element of scientific progress. Science is not only observation and measurement, but also radical changes in the way we think. The theory of relativity, for example, did not arise from a lack of data, but from a change of perspective.

3) Technological progress as a catalyst. In Astronomy, for example, the replacement of photographic plates by CCDs brought about a minor revolution. Many experiments that seem unfeasible today may well be routine in a few decades.

4) The evolution of methodology. Physics is, of course, neither static nor fully settled. The traditional experimental approach is not being abandoned, but is being enriched and continuously transformed. We now rely more and more on advanced mathematics and computational models, supported by enormous computing power. New research fields keep emerging, opening up new paths.

We may indeed be going through a period of relative calm, but the history of Physics shows that such phases are often little more than the prelude to unexpected developments. In any case, this is not a possibility we can dismiss light-heartedly.

And even if certain limits, such as the Planck scale, appear fundamental and impenetrable, that does not mean discoveries stop there.

It is not only a question of what we can measure, but also of how we fit it into new ways of thinking. What matters is that we can keep formulating hypotheses that lead to testable predictions. As long as that remains possible, I don't think we need to worry much about the state of Physics.

Saturday, 2 November 2024

Feynman, a complicated legacy

Ethan Siegel is the author of the "Starts with a Bang" newsletter on BigThink, which is a great read that I recommend to everyone interested in the latest developments in Astronomy, and in Physics in general. He is also a Facebook friend. In his Nov 1st newsletter, Ethan addresses a question from one of his readers, which ends with: "It seems to me that you have a somewhat ambivalent relationship with R. Feynman. Is there a deeper reason for this?". Ethan gives a very detailed answer, acknowledging Feynman's scientific accomplishments before going on to highlight some of Feynman's more controversial character traits, and I certainly agree with his conclusion that "We can rightfully laud [Feynman] for his great accomplishments while still being critical of his unacceptable behaviors, and I would argue we have an obligation to share the full truth about Feynman, both the physicist and the human being, with subsequent generations of scientists and science-literate citizens." But after reading the whole article, and not finding anything specific to disagree with, I was left with the feeling that something was missing, and it took me a while to pin down the origins of my discomfort and put them into words.


Feynman was born in 1918, was mostly active from the 1940s to the 1960s, and died in 1988 at the age of 69. He was successfully treated for abdominal cancer in 1980, but the cancer came back with a vengeance in 1988, by which point it had progressed so far that Feynman refused further treatment. In many ways his social views were a product of his time, yet his patterns of behaviour were not predictably consistent. Ethan's article does a good job of pointing out the negatives, when viewed anachronistically through a modern lens, but in leaving out the aspects of Feynman's character that made him "curious", it ends up with an incomplete picture of who Feynman really was as a person.


Fundamentally, Feynman was an iconoclast, and it is through this lens that his character contradictions can be reconciled. He was also a prankster with little regard for authority. While working on the Manhattan Project, he famously amused himself by breaking into secure safes containing nuclear secrets—not to undermine the project, but to expose how lax security was. Sometimes he would even leave notes in the safes, like "I borrowed document no. LA4312–Feynman the safecracker." He loved the arts and was an avid bongo player who also learned to paint for fun (even holding an exhibition under a pseudonym), occasionally having intense discussions about art vs. science with his artist/mentor/friend Jirayr "Jerry" Zorthian (check out the Ode to a Flower monologue). An older colleague of his from Caltech once told me that Feynman could sometimes be found smoking weed in the professors' common room, much to the chagrin of everyone else. He was also a regular at the local strip club and knew all the girls working there. He would pick up an orange juice from the bar (by that time he wasn't a big fan of alcohol), together with a bunch of napkins or place mats, and he would watch the show or just sit there and think, scribbling down equations on the napkins, alone or with company. This was the kind of environment he felt most at ease in, and he actually did quite a number of his calculations in that place. When the county tried to close the place down on account of "uncovered breasts", he was the bar's only regular customer willing to come forward and testify publicly in court in defense of the bar.


Such stories reveal Feynman as a gadfly—a horsefly, if you will—who delighted in seeing how the social and academic order would reconfigure itself when challenged. He cared little for social norms or accolades and famously eschewed honorary degrees and pomp. His devotion was to truth, inquiry, and the freedom to explore without inhibition.


Ethan's article rightly discusses the biases prevalent in academia during Feynman's time (and later) and how he sometimes mirrored those biases. Like most people of his era, Feynman does not seem to have carefully thought through the harmful implications of maintaining these problematic attitudes. Take, for example, a talk he gave in 1966 at the National Science Teachers Association. The topic he was asked to speak on was "What is Science?", a title he didn't really like. It is a fantastic talk and I strongly encourage everyone to read the transcript, but it is also a product of its time. At some point during this talk Feynman says the following:


“I listened to a conversation between two girls, and one was explaining that if you want to make a straight line, you see, you go over a certain number to the right for each row you go up–that is, if you go over each time the same amount when you go up a row, you make a straight line–a deep principle of analytic geometry! It went on. I was rather amazed. I didn’t realize the female mind was capable of understanding analytic geometry. She went on and said, “Suppose you have another line coming in from the other side, and you want to figure out where they are going to intersect. Suppose on one line you go over two to the right for every one you go up, and the other line goes over three to the right for every one that it goes up, and they start twenty steps apart,” etc.–I was flabbergasted. She figured out where the intersection was. It turned out that one girl was explaining to the other how to knit argyle socks.”


This passage clearly comes across as sexist, reflecting the prevalent attitudes of that time. However, what is more revealing about how Feynman thought is what comes after it. Feynman doesn't end there, but continues the thought in this fashion: "I, therefore, did learn a lesson: The female mind is capable of understanding analytic geometry. Those people who have for years been insisting (in the face of all obvious evidence to the contrary) that the male and female are equally capable of rational thought may have something. The difficulty may just be that we have never yet discovered a way to communicate with the female mind."


Feynman here seems to acknowledge the possibility that systemic issues, rather than innate differences, limited women's participation in science. But he offers no solution to the problem and moves back to his main topic. He does not take it on as his own problem to solve for the whole of the country. Society, for him, is one thing and the scientific enterprise another, and he is primarily interested in the latter.


Richard Feynman also had a younger sister, Joan. Although they were separated by nine years, Joan and Richard were close, as Joan was also very curious about how the world worked. Their mother was a sophisticated woman who had marched for women’s suffrage in her youth, but believed that women lacked the capacity to understand maths and physics. Despite that negative attitude at home, the young Richard encouraged Joan’s interest in science. From a very young age, he would train her to solve simple math problems and rewarded each correct answer by letting her tug on his hair while he made funny faces. By the time she was 5, Richard was hiring her for 2 cents a week to assist him in the electronics lab he’d built in his room. Joan grew up to become an astrophysicist, crediting her brother’s mentorship as a key influence. In his later years, Richard became acutely aware of the discrimination women faced in physics, because he saw how it affected his sister. For her part, Joan Feynman was awarded NASA’s Exceptional Science Achievement medal in 2002, for her continued support and encouragement for women to persevere and make their marks in science.


Feynman's first marriage, to Arline Greenbaum, adds another layer of complexity. They were high-school sweethearts and by all accounts their love was profound and marked by mutual respect. Feynman wrote her heartfelt letters that revealed his deep admiration for her intellect and spirit. Arline was sick for a long time, even before their marriage, and eventually died of tuberculosis in 1945, while Richard was working on the Manhattan Project. When she was near death, he rushed from Los Alamos to be by her side. You can read here a remarkable letter he wrote two years after Arline's death, where he pours out his heart. The letter was discovered in a stash of old letters by Feynman's biographer James Gleick.

[Arline Greenbaum and Richard Feynman]


Richard Feynman married again in 1952, to Mary Louise Bell. This second marriage was difficult, strained by differences in temperament and lifestyle, and ended in divorce. Mary held very conservative views and the two quarrelled often. She grew fed up with his obsession with calculus and physics, and reported that on several occasions, when she interrupted his calculations (which he would sometimes do while lying in bed at night) or his bongo playing, he would fly into a rage. She filed for divorce in 1956. His third marriage, to Gweneth Howarth, who shared his enthusiasm for travelling and his playfulness, was far more harmonious.


In the book "What Do You Care What Other People Think?" Feynman recalls an incident in which feminist protesters (led, ironically, by a man) entered the hall and picketed a lecture he was about to give in San Francisco, holding up placards and handing out leaflets calling him a "sexist pig". As soon as he got up to speak, some of the protesters marched to the front of the lecture hall and, holding their placards high, started chanting "Feynman sexist pig!". Instead of reacting defensively, Feynman addressed the protesters, saying: "Perhaps, after all, it is good that you came. For women do indeed suffer from prejudice and discrimination in Physics, and your presence here today serves to remind us of these difficulties and the need to remedy them".


Feynman’s attitudes certainly weren’t those of a consistent advocate for gender equality, as we might expect today, but they weren’t wholly regressive either. The idea of dismantling systemic barriers wasn’t part of his worldview, but he was not resistant to change and was willing to support those who defied convention.


Criticism of Feynman's legacy through the lens of presentism risks overlooking the full complexity of his character, and how progressive some of his views were for his time. He was a complicated individual, whose brilliance was tempered by human imperfections.

He achieved remarkable things in his lifetime and inspired many physicists who came after him, both male and female.


As with every figure who has left a mark on the landscape of history, fairness requires that we be honest about who he was, acknowledging both his achievements and his flaws, while considering the context of his time. His legacy cannot be flattened into an uncomplicated hero-or-villain narrative.


Perhaps Feynman's most enduring legacy is to remind us that progress is born from questioning, curiosity, and the willingness to defy convention, all driven by the joy of discovery. To reduce such a complicated life to binary judgments, to refuse to celebrate it, warts and all, would be to forget why we study these figures in the first place: to question, to learn, and to grow.



Sunday, 31 December 2023

Artificial Intelligence : Digital Utopia or Dystopian Nightmare?

"They shamelessly print, at negligible cost, material which may inflame impressionable youths, while a true writer dies of hunger. Cure the plague which is doing away with the laws of all decency, and curb the printers. They persist in their sick vices, setting Tibullus in type, while a young girl reads Ovid to learn sinfulness. [...] Writing, which brings in gold for us, should be respected and held to be nobler than all goods, unless she has suffered degradation in the brothel of the printing presses." So wrote the Italian Benedictine monk Filippo de Strata in a letter to the Doge of Venice in 1490, complaining about the introduction of the printing press to the city.


["The Unrestrained Demon". An anti-electricity cartoon from 1889.]

Throughout history, each wave of technological innovation has been met with its own unique blend of curiosity and trepidation. Take, for example, the discovery of electricity. Initially regarded as little more than an amusing novelty, with public demonstrations of its effects ranging from Thomas Edison's mildly entertaining electric pen to the more macabre spectacle of electrocuting animals, electricity also came to be feared as a potential danger to public health. As the decades passed, the worries gradually subsided as its profound utility became apparent even to the staunchest skeptics, eventually establishing it as the bedrock of modern civilization. Consider another example: the Luddites, a group of textile workers in 19th-century England who destroyed weaving machinery in protest against job displacement, and who did not shy away from violence.


These historical instances reflect an enduring concern: the anxiety accompanying new technologies and the fear that human skills will be rendered redundant. Similar concerns are directed towards the field of artificial intelligence (AI) today. 


AI has been with us for at least several decades, making great strides since the introduction of rule-based systems in the 1960s, but it is only in the last few years that a significant milestone appears to have been reached, through the combination of large language models (LLMs), which are examples of deep learning, with the truly massive data sets used to train them. Although Artificial General Intelligence (AGI), which many consider the Holy Grail of AI research, still seems out of reach for now, narrow AI routinely outperforms humans in several highly specialised tasks.


So, what are we to do now in the face of AI's relentless march forward? Should we cross our fingers and hope for the best like new Pollyannas, or are we to become neo-Luddites, smashing away at every AI creation in the digital domain? The answer is neither. For one, these are still the early days and the technology is in its infancy. Granted, it has already demonstrated it can provide assistance in a number of different domains, but it often makes mistakes, hallucinates and is not particularly creative unless carefully steered by an expert user. 


An AI agent can provide seemingly insightful responses to questions about highly specialised subjects, where an average person lacking the expertise could not. Experts in particular fields, such as programmers, engineers, artists, and scientists, have the training needed to pose complex technical questions using highly specialised terminology, and are able to understand the contextually specialised responses of the AI. These expert users are also able to identify problems and inadequacies in those responses and improve them through successive queries, which, again, the average person is not in a position to do for lack of specialised training.


On a practical level, an expert user is therefore able to employ the AI systems of today as semi-skilled collaborators, and to drive gradual improvements, seeking additional help as and when it is needed. It is only the experts, who have spent a lifetime honing their skills, that have the necessary know-how to push this technology to its limits, far beyond what a casual user is capable of.


For instance, suppose you compose music for a living, and that there exists an AI agent that can help with composing music. You could ask it to prepare a template for a theme that, even though it has yet to completely coalesce in your mind, you know must be in C minor, with an arrangement reminiscent of Baroque works by, say, Telemann and Pergolesi. You guess the sound you have in mind is probably about 80% Telemann and only about 20% Pergolesi. Maybe there's even a bit of Corelli in there, but you are not sure. You know exactly which instruments you want to use, you know the harmony, and so on, and you pass all this information to the AI, asking it to give you a test theme. Maybe you don't like what it gives you. You ask it for variations until you find one that roughly matches what you have in mind, or one that clicks and inspires you. Then you ask it to put all the notes on a staff, print the score, and edit the details, making adjustments by hand. Then you scan the score you worked on and send it back to the AI, asking it to, say, improve the timing or change the duration of this or that note, until the result satisfies you and is close to what you envision. In short, this collaboration with the AI will significantly simplify your work as a composer, while you remain the creative director of the entire process.


Is this inappropriate? Consider Hans Zimmer, the famous film score composer, who can afford to employ a number of other composers, as well as orchestrators and sound engineers, to help him write and arrange the music for his movie scores. The use of AI could allow budget-constrained and less well-known composers to do something similar, and perhaps even to become competitive and get their music to reach new audiences.


All this will lead to a democratisation of the creative process and a creative inflation that will have a lasting impact across every professional field. To appreciate how this may play out, let us continue with our thought experiment in the music industry. Supply and competition will certainly be greater, with the marked difference that the playing field will be more level: lesser-known composers will now be able to challenge more established names. Output will increase exponentially, making it far more challenging to build and maintain a lasting reputation. In an ocean of mediocre compositions, the deciding factor will inevitably become the uniquely personal touch the composer imparts to their music.


None of this is sufficient to conclude that fewer people will consider music as a viable career option, but it will most definitely affect how musicians build a career. It is also unlikely that people will suddenly stop wanting to learn how to master musical instruments, a difficult process which did not disappear even when synthesisers and electronic music were invented. There may even be increased interest in attending live events, such as concerts and recitals. 


Take the example of painting and photography. Photography did not destroy painting; it rejuvenated it. It redefined the meaning of the art of painting and, as a bonus, created the altogether new branch of artistic photography and its related professions. When faithful representation in painting succumbed to the undisputed precision of the photographic plate in the late 19th and early 20th centuries, the creator was freed from strict adherence to realism, giving birth to modern art, and a new generation of groundbreaking artists came to the fore: Picasso, Dalí, Monet, Manet, Kandinsky, Van Gogh, and many, many others all rode this new wave.


There is not enough space here to elaborate on the very significant social knock-on effects these developments had; that is an exercise better left to historians. The bottom line is that it would be disingenuous, to say the least, to persist in the claim that photography destroyed painting, not least because realistic representation remains an active branch of painting today.


In today's world, individuals who make a living exclusively from creative painting have a much easier time doing so than their predecessors in the 18th and 19th centuries, who could achieve little without the continued support of rich sponsors. Of course, we must concede that all this was made possible by fundamental changes in social conditions for the better, but these very changes were themselves significantly influenced by the historical developments mentioned above. It is hard to disentangle with any confidence exactly how all these trends fed on each other. Society huffed and puffed, blew the doors down, and replaced an obsolete structure with a more elaborate one.


In the numerical sciences, the pocket calculator, and later the computer, did not eliminate the need to learn algebra. They accelerated the ability to perform complex calculations to an incredible degree, but did not make the learning of the underlying mathematical rules and methods irrelevant. We still continue to teach these rules and methods all the way from elementary school to university. 


Where these technological developments have clearly made a difference is in the fact that we now recognize that better mechanisms are available to us for controlling the accuracy of our results and for minimising errors, and we employ them. No researcher today would expect their doctoral student to perform all calculations by hand, because of the comparatively greater likelihood of introducing small errors somewhere along the line, which can be costly both in wasted time and in added frustration. What researchers are interested in is the proper scientific analysis of their measurements.


In any case, the further development of AI is of such great importance that it now constitutes a strategic necessity for every country, so any discussion about impacts and limitations should start with this as a given. There are problems and challenges that are indeed significant, but they are potentially solvable by adjusting existing socio-economic models or by introducing novel solutions, as has happened again and again in the past. As societies gradually grow more accustomed to these changes, they become better able to absorb cultural shocks and reorganise around new points of equilibrium. Perhaps the greatest challenges AI will bring are of a different nature: for example, how will we prevent a race to the bottom when it comes to autonomous smart weapons, and how can we ethically align the goals of a generalised AI with human goals? AI is designed to find unique solutions to very specific questions, and these may not always be the answers we would hope for. This is especially true when the AI is faced with complicated ethical choices.


Now let us consider what AI can do for teaching and research. Yes, AI will eventually be able to solve standard school exercises and explain all the intermediate steps in detail. That is, it will be able to provide specific solutions to well-formulated questions drawn from a range of known problem types. This suggests that the education system will need to adjust in response, focusing less on methodology and information gathering, which can easily be automated, and more on developing critical thinking and analytical skills. At the frontiers of scientific research, by contrast, we often don't even have well-formulated questions, nor do we know exactly which questions would be best to ask, nor how to interpret the results when data are insufficient. Solutions are often multidimensional and not unique, and AI methods can help us navigate a complex parameter space.


Many parents and teachers have raised concerns about the state of math literacy, primarily in the West, compared with previous decades. These concerns rest on an unstated ceteris paribus assumption, which is itself somewhat of a problem: students' skills in many other subjects, several of which did not even exist a few decades ago, are on an entirely different level when it comes to collecting and using available information. Even if we concede that students today may be weaker in mathematics, the fact remains that these other skills more than make up for it.

Granted, this is an issue that should concern us, but it is not nearly as catastrophic as it is usually made to sound. Compare, for example, the level of students in computational analysis and numerical methods in the 70s with what they are able to achieve today. No comparison. The number of students choosing STEM subjects at university shows a steady increase, although dropout rates, especially in Physics, remain high.


Are all concerns about the development and use of AI to be dismissed as mere scaremongering? Of course not; that would be naive and dangerous. There are many legitimate, well-grounded concerns. On a societal level, we need to think hard about how to regulate the AI industry, and ways must be found to remunerate professionals whose work has been used to train AI models. How exactly that might work in practice is yet to be determined, but we had better start having these discussions now.


Sooner or later every field and every profession will be affected. Moravec's "Landscape of human competence" gets progressively more flooded as time goes by. Highly creative professions, which until recently were thought to be out of reach for at least a few more decades, are already feeling the first splashes. There are now AI copilots that can be used to write code, and software engineers are already trying them out, with a mix of elation and worry. In my own scientific field, we increasingly rely on machine learning to look for patterns in massive data sets; there is simply too much data, and it would take decades to sift through it without such tools.


The steady march of AI will inevitably cause major disruptions in how society currently operates, and I strongly suspect that it will make the introduction of some form of universal basic income (UBI) unavoidable. We have a long way to go.


Friday, 17 May 2019

Bohr, Oppenheimer, Einstein.

During WWII, Bohr tried to convince Roosevelt and Churchill to work together with the Soviets on the Manhattan Project to speed up the results. He argued that it was extremely dangerous to have nuclear weapons developed separately by several powers rather than under some form of controlled sharing of the knowledge. Churchill would not hear of it. In 1944 the Allies learned that the Germans had no atomic weapons, and Bohr then hoped that further atomic armament could be prevented by new international agreements before the weapons could be used in war. He also hoped that new contacts could be established between Western and Russian nuclear scientists, as a sign of cooperation between the soon-to-be victorious powers. Neither Roosevelt nor Churchill thought so. In June 1950 he addressed an "Open Letter" to the United Nations calling for international cooperation on nuclear energy.
Oppenheimer is often called the 'father of the atomic bomb'. After the war he was chief advisor to the Atomic Energy Commission, and he used that position to lobby for international control of nuclear power in order to avert nuclear proliferation and an arms race with the Soviet Union. After provoking the ire of many politicians with his outspoken opinions, he had his security clearance revoked in a much-publicised hearing in 1954 and was effectively stripped of his direct political influence. In a TV broadcast in 1965, he said this about the Trinity test in New Mexico, the first ever nuclear test: "We knew the world would not be the same. A few people laughed, a few people cried. Most people were silent. I remembered the line from the Hindu scripture, the Bhagavad Gita; Vishnu is trying to persuade the Prince that he should do his duty and, to impress him, takes on his multi-armed form and says, 'Now I am become Death, the destroyer of worlds.' I suppose we all thought that, one way or another."
Einstein was a known pacifist. He hated armies. Yet he supported the Manhattan Project. In a letter he sent to Roosevelt, he wrote that there was a possibility that German scientists might win the race to build an atomic bomb, and warned him that Hitler would be more than willing to resort to such a weapon. He recommended that the US become directly involved in uranium research; this led to the Manhattan Project. In a letter to a friend one year before he died, he wrote: "I made one great mistake in my life — when I signed the letter to President Roosevelt recommending that atom bombs be made; but there was some justification — the danger that the Germans would make them." In 1945, realising that an atomic arms race might begin, he wrote another letter to Roosevelt enclosing a warning against using the A-bomb. The letter was still unopened on Roosevelt's desk when he died. Einstein was chairman of the Emergency Committee of Atomic Scientists, set up in 1946; its aims were to educate the public about the dangers of atomic warfare, to promote the benign use of atomic energy, and to work for the abolition of war. Not afraid of swimming against the tide, he tried hard to create links with the Soviet Union and to prevent the escalation of the Cold War.

Friday, 29 December 2017

Conversations with a 5 year old - Trains and death


We're at the train station of a former DDR town, with time to spare before our connection arrives. A few steps outside the fence gate stands a black-varnished antique steam locomotive, built in the 1920s according to an inscription on it. The last time it moved people from here to there was in the '60s. It was then retired and placed on a pedestal to remind passers-by of the remarkable strides technology has made since the dawn of the industrial revolution. My son likes trains, so we naturally gravitate towards this fossil of a not so distant era.

- Dad, is this an old train?
- Yup.
- How old?
- It says here it’s about 100 years old.
I know he gets that. Anything above is just ‘hundreds and hundreds’.
He’s clearly impressed so I press on, “and you want to know something cool? This train can move without electricity. See this door?” I point to the front of the boiler; “You can put wood in there and make a fire. This train also had a lot of water in it and it boiled when you made a fire. Hot steam then came out and made these things move”, I show him the pistons, ”can you see where they are connected?”
- The wheels!
- Yup, they push the wheels and make them go round, so the train moves.
- Cooool. And can it go very, very fast?
- Not very fast. The ICE is much faster. This train is quite old and we don’t use it anymore.
He takes a good look at the train.
- And what will happen to it when it is older?
- Well, it will probably stay here for some years and then it will be gone.
- Why will it be gone?
- Because things don’t stay forever as they are, they change. See these houses? A long time from now they will be gone too, and maybe other new houses will be there.
- Our house too?
- Yup, our house too, but not for a long time.
- And you and mama too?
- Yup, mama and I too.
- And Mina (his sister) and I too?
- Yes, you too, but not for a very long time. Nothing lasts forever but it’s not something to worry about because you will have all the time in the world to play with your friends and laugh and make new cool things and have lots and lots of fun!
- Why?
- Well, you see, everything is made up of these really tiny lego blocks. They're not really lego blocks, but they are very similar. We call them atoms. This train here, touch it, it’s hard right? It’s made up of special lego blocks that are quite hard when put together. They are called metals. Now touch my hand, it’s soft, right? These are a different special type of lego block and you can put many of them together and they make up all living things, like me and you and animals and trees and so on.
- And houses?
- Nope, houses are not alive. They don’t eat and drink, and they don’t do kaka and pipi.
The laughter eventually subsides.
- And you see, what happens with all things when they are very, very old, the lego blocks that make them up are taken apart and something new can be built. So maybe some of the lego blocks that now make up papa or mama or Albert will be used to make a cool new flower or a tree or another person or something else. Like when you take your lego spaceship apart and build something else with the bricks.
- And what happens then?
My partner now picks up the baton,
- Well, in the end we all go back to the stars. That’s where all the lego blocks come from. And that’s pretty cool isn’t it?

And there’s that smile again.

Friday, 26 December 2014

How do we know how old Everything is?

How do we know the age of fossils, the Earth, distant stars and the Universe itself?
Fraser Cain explains in this short video.

Monday, 8 December 2014

What is the value of research?

The Rosetta space mission cost €1.4 billion. Zero benefit to humankind, they say. A pointless waste, they say.


I'll start with the boring part.

Let's look at the numbers.
The total cost of the mission (1996-2015) was €1.4 billion (on average €74.7 million per year).
That is, €3.2 for every European taxpayer (about €0.2 per year from 1996 to 2015).
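For readers who like to see the arithmetic spelled out, here is a quick back-of-the-envelope check of the figures above in Python; the ~440 million taxpayer count is my own rough assumption, implied by the quoted per-taxpayer figure:

```python
# Rough sanity check of the Rosetta cost figures quoted above.
total_cost_eur = 1.4e9     # total mission cost, 1996-2015
mission_years = 20         # 1996 through 2015
taxpayers = 440e6          # assumed number of European taxpayers (rough assumption)

per_year = total_cost_eur / mission_years             # ~70 million EUR per year
per_taxpayer = total_cost_eur / taxpayers             # ~3.2 EUR per taxpayer in total
per_taxpayer_per_year = per_taxpayer / mission_years  # ~0.16 EUR per taxpayer per year

print(f"{per_year / 1e6:.0f} million EUR per year")
print(f"{per_taxpayer:.1f} EUR per taxpayer in total")
print(f"{per_taxpayer_per_year:.2f} EUR per taxpayer per year")
```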

To put that in perspective:

  • Price of a cinema ticket at the Athinaion (Fri.-Sun.): €7.5
  • Cost of 4 Airbus A380 aircraft: €1.7 billion
  • Cost of the US elections (2012): $6.9 billion (about €5.0 billion)
  • Cost of the "Athens 2004" Olympics: €8.95 billion
  • UK public health spending (NHS, 2012): £121.3 billion (about €151.8 billion)
  • Total defence spending in the European Union (sum of member states' national spending, 2012): €192.5 billion


But what was Rosetta's mission?
To study the chemical composition of a comet, a leftover from the formation of the Solar System; that is, to give us information about the conditions that prevailed in the Solar System in its infancy. Some of the constituents of such comets are believed to have played a role in the creation of the Earth's oceans, and therefore in the emergence of life.

Beyond its groundbreaking solar-panel technology and the inspiration it will provide to the scientists of tomorrow, the mission has no immediate, measurable benefits. Like every ambitious piece of research that methodically chips away at a corner of human ignorance to reveal, beneath the crust, the face of the future, it doesn't need to have any.

It is said that when the once penniless Faraday had become famous, Queen Victoria invited him to dinner at the palace. At some point she asked him, '...and tell me, what use could this "electricity" possibly be?', to which he replied, 'Madam, of what use is a baby?' (In another version the question came from Prime Minister Peel, and the answer was in the same vein: 'Why, sir, it may well turn out to be something you can tax!')

Of what use is the discovery of fire? Of the wheel? Of gunpowder? Of America? Of radioactivity? Of the fact that mass and energy are equivalent? Or the knowledge that the Earth is spherical rather than flat, and that we are not the centre of the Universe? With what weights and measures are we to gauge the import of new knowledge, and when should the measurement be made?

All the great discoveries that, over the long run, leave their indelible marks on history serve as signposts in its labyrinthine corridors. We do not need to justify them in the present, because we cannot know what they hold in store.

It is heartening that humanity still seems to possess something of its youthful vigour, striving to acquire new knowledge purely because it enjoys the journey. When we stop funding research that has no immediate practical applications, arteriosclerosis will set in. And then we may as well seal the doors and turn off the lights.

Sources:
athinorama
Airbus
BBC
Το ΒΗΜΑ
Reuters
European Defence Agency Data Portal
Office for National Statistics (UK)
European Space Agency