a blog about philosophy in public affairs

Category: Health

‘Truman Care’ for Dementia

On the outskirts of Amsterdam there is a small village called Hogewey, notable because all of its 152 residents have severe or extreme dementia. Hogewey is a gated model village, complete with town square, post office, theatre, hair salon, café-restaurant and supermarket – as well as cameras monitoring residents around the clock and well-trained staff working incognito in a myriad of roles, from post-office clerk to supermarket cashier. Every detail of this ‘fake reality’ has been meticulously designed so that residents can experience life as close to ‘normal’ as possible. Critics have drawn parallels with the deception depicted in the 1998 ‘social science fiction’ film The Truman Show; but many Alzheimer’s experts have praised the pioneering facility for being the first to adjust ‘our’ reality to allow those with dementia to live in a safe and comforting environment – one built around life rather than death.

Taking inspiration from Hogewey, the Grove Care nursing home in Winterbourne, Bristol, has developed ‘Memory Lane’: a recreation of a 1950s high street, including a post office, pub, bus stop, phone box and shop windows full of memorabilia.

I’d like to outline two sets of reasons for thinking we should move towards this model of care (all-day reminiscence therapy, or ‘Truman Care’ if you like), and then briefly discuss what I take to be the main problem facing such a move.

21st Century Smoking

 

At the British Medical Association’s (BMA) annual representatives meeting this week, doctors voted overwhelmingly to push for a permanent ban on the sale of cigarettes to those born after 2000.* What different reasons might motivate, and potentially justify, state intervention in citizens’ smoking behaviour? Broadly speaking, the main distinctions are: (1) between welfare-based (both individual and collective) and autonomy-based reasons; (2) between ‘harm to self’ and ‘harm to others’ – that is, intervening for the sake of smokers versus for the sake of non-smokers generally; and, relatedly, (3) between aiming to increase tobacco-use cessation (stopping smokers smoking) and aiming to reduce tobacco-use initiation (stopping people from starting to smoke in the first place). Accordingly, an initial taxonomy of reasons might have the following six cells:

                    Welfare-based reasons          Autonomy-based reasons
Smokers             Welfare of smokers             Autonomy of smokers
Non-smokers         Welfare of non-smokers         Autonomy of non-smokers
Potential smokers   Welfare of potential smokers   Autonomy of potential smokers

The Need for Content Notes and Trigger Warnings in Seminars

Photo by Goska Smierzchalska / CC BY-NC 2.0

Content note: this post contains a discussion of sexual violence and rape.

A few weeks ago I was at a seminar where the speaker unexpectedly diverted from the main topic of their paper and used a rape example to support their argument. As discussions of rape in philosophy seminars go, it was not particularly insensitive. But what disturbed me was that the pre-circulated paper’s title and abstract gave no indication that it would include a discussion of rape. A victim of rape or sexual violence would have had no warning that they were about to be confronted with an extended discussion of it. Given the appalling statistics on rape and sexual violence, that group would almost certainly have included several people in the room. For them the discussion of rape might not have been just another abstract thought experiment, but an intensely triggering experience that brought back memories they did not want to deal with at that point. It made me think that the speaker could have respected this possibility by sending a short ‘content note’ (like the one above) with the abstract, warning people that the seminar would contain a discussion of rape.

Over the last few months there has in fact been a lot of online discussion over the use of content notes and trigger warnings1 in academia. The recent debate was sparked by students at several US universities calling for content notes/trigger warnings to be included in course syllabuses. The idea behind these is to warn students that certain readings in the course contain discussions of topics that might be stressful or triggering. Much of the ensuing criticism has taken the line that they represent a ‘serious threat to intellectual freedom’ and even ‘one giant leap for censorship’. This criticism is unfortunate because it falsely suggests that content notes/trigger warnings are there to stop or censor discussions of sensitive topics. Instead, their point is to facilitate these discussions by creating a safe and supportive environment, where people are given the choice over how and when they engage with topics that they know can be immensely painful for them. As Laurie Penny argues: “Trigger warnings are fundamentally about empathy. They are a polite plea for more openness, not less; for more truth, not less. They allow taboo topics and the experience of hurt and pain, often by marginalised people, to be spoken of frankly. They are the opposite of censorship.”

Perhaps some of the hostility to content notes/trigger warnings comes from a lack of knowledge about how they could work. People seem to imagine them as these big intrusive and ugly warnings. I think an actual example of a content note shows us how far from the truth this is:

Course Content Note: At times this semester we will be discussing historical events that may be disturbing, even traumatizing, to some students. If you ever feel the need to step outside during one of these discussions, either for a short time or for the rest of the class session, you may always do so without academic penalty. (You will, however, be responsible for any material you miss. If you do leave the room for a significant time, please make arrangements to get notes from another student or see me individually.) 

If you ever wish to discuss your personal reactions to this material, either with the class or with me afterwards, I welcome such discussion as an appropriate part of our coursework.

Though much of the online discussion has focused on syllabuses and student seminars, I think it is important to recognise that the same arguments also apply to seminars among professional academics. We academics sometimes falsely assume that the standards and principles we apply to student and non-academic discussions do not apply to our own professional practices. An academic giving a paper or a lecture which includes potentially triggering discussions should give attendees advance notice. This allows people to prepare themselves rather than have it sprung upon them, and even gives them the opportunity to avoid coming at all if they feel they are not able to cope with the discussion that day. Of course, this does not address what is said during the ensuing question period. It does not stop another academic from insensitively using an example of rape or sexual violence when they respond to the speaker. Content notes and trigger warnings cannot (and are not supposed to) cover every possibility. To address that, we could start by educating academics about what it’s like to be a victim of rape and to hear examples of rape used casually in philosophy seminars.

Some have argued that “life doesn’t come with a trigger warning” and tried to suggest that using them in any situation is therefore pointless. While we may not be able to change everything, seminars are a small sphere of life that we have the power to make less hostile and more welcoming.



1 Content notes and trigger warnings are frequently confused. The difference is that “Trigger warnings are about attempting to identify common triggers for panic attacks and related experiences and tagging media for the benefit of people who find it helpful to be warned when media contains this material. Content notes are simply flags with information about content, to be used at the discretion of the person who encounters them.”

Capping Working Hours

Recently, the scandalous decisions of some investment banks to treat their employees like human beings, by suggesting they take Friday nights and Saturdays off, have raised much debate amongst financial journalists and their ilk.
The issue of long working hours is not limited to investment banks: a Harvard Business School survey of 1,000 professionals in the US found that 94% worked fifty hours or more a week, and that for almost half, working hours averaged over 65 a week. With increasing automation in production chains moving labour into customer-facing service roles, more individuals are likely to face this challenge in their daily lives.
There are good reasons to think that these extra hours are not actually useful. Economists have long known that as working hours increase, the marginal product of workers falls – mistakes increase and the quality of work produced declines.
More important than the impacts on productivity, however, are the morally relevant considerations related to cultures of long working hours:
Industries with long working hours are typically biased in favour of those who do not have other commitments limiting their available time – most notably child care, which, in our society, currently means women. Economist Claudia Goldin finds that gender gaps in wages are greatest in industries with “non-linear pay structures” – essentially, those in which individuals who can work extremely long hours are disproportionately rewarded. This describes most jobs in the corporate, financial and legal worlds.
There are important health implications of longer working hours, with significant evidence that those who regularly work longer than 48 hours a week are “likely to suffer an increased risk of heart disease, stress-related illness, mental illness, diabetes and bowel problems.”
Finally, there are various employment-related issues worth considering – for example, would unemployment fall if each 100-hour-per-week job were split into two of 50? Would such a policy help reduce the concentration of power in organisations, as key managerial tasks would likely have to be increasingly shared?
While our society may gain significantly from moving away from long working hours, it will always be incredibly difficult for any firm to act unilaterally, owing to substantial co-ordination failures in this area.
The appropriate response, I believe, is for government to intervene with a hard cap of 48 hours per week applying across almost all industries, with no built-in exceptions beyond those that are absolutely essential. The current EU Working Time Directive, which is supposed to serve a similar function, is farcical in its ability to constrain working hours, given the number of exceptions and opt-out clauses built into it.
A hard cap of 48 hours would be difficult to implement, would have some uncomfortable implications (for example, forcing individuals who enjoy their jobs to go home and stop working) and would likely have some negative consequences for the economy. However, there would seem to be substantial gains to be made, and I believe these are large enough to justify developing such a cap.

*Update: the Marxist economist Chris Dillow has an excellent post describing how problems like long working hours can arise naturally without actually benefiting anyone.

‘Social’ Deprivation

To say that a citizen suffers social deprivation is typically thought to imply that the citizen suffers poverty, has poor education, and has a low socioeconomic status. In this blog post, I am not concerned with social deprivation conceived in this way. Rather, what I understand by ‘social’ deprivation is ‘a persisting lack of minimally adequate opportunities for decent human contact’*. According to this definition, citizens suffer social deprivation when they are denied minimally adequate opportunities for interpersonal interaction, associative inclusion, and interdependent care, for example.
 
Social deprivation is closely related to loneliness – defined as the perceived lack of opportunities for valuable human contact. A 2010 survey by the Mental Health Foundation reported that, in the UK, only 22% of citizens never feel lonely, 11% feel lonely often, and 42% have felt depressed as a result of loneliness. More tellingly, the survey also found that 48% of citizens strongly agree or agree that people are getting lonelier in general. Strictly speaking, loneliness need not be caused by social deprivation; however, it seems reasonable to think that social deprivation will often play an important causal role.
 
Worryingly, the adverse effects of social deprivation and loneliness are manifold. For example, various empirical studies have revealed that both social deprivation and loneliness are associated with numerous adverse health outcomes – morbidity and mortality, in particular. Notably, loneliness is reported to be as strong a predictor of bad health as smoking! In addition to their adverse physiological effects, social deprivation and loneliness also have adverse psychological effects: in extreme cases, such as those involving long-term solitary confinement, they are often reported to be as agonising an experience as torture.
 
What is the significance of all of this? Clearly, this evidence suggests that, in addition to a concern for citizens’ material interests, we should also have a concern for citizens’ social interests. In other words, we have weighty reasons to care about, and to protect against, social deprivation and loneliness. In the remainder of this post, I outline and briefly defend two more specific proposals that aim at serving this end.
 
First, our concern for citizens’ social interests seems to suggest that we should prohibit the use of institutionalised forms of social deprivation, such as long-term solitary confinement and medical isolation and quarantine. Instead, even if it is more expensive, we should look to alternative practices that serve the same function as the original institution, but in a way that protects citizens’ interest in decent human contact. The argument here is simple: evidence suggests that these practices cause considerable psychological and physiological harm, and this harm far outweighs the level of harm citizens – even serious criminal offenders – are liable to bear.
Second, our concern for citizens’ social interests also suggests that we have weighty reasons to invest in infrastructure that is conducive to the protection of opportunities for decent human contact. This could take the form of mobility assistance for those, such as the elderly, who are most likely to suffer social deprivation, or subsidies for organisations, such as community pubs, that play an important role in meeting many citizens’ social needs. Failing to invest here amounts to risking neglect of citizens’ social interests and, for this reason, must be avoided.

*I take this definition of ‘social deprivation’ from Kimberley Brownlee, ‘A Human Right Against Social Deprivation’, The Philosophical Quarterly, 63 (2013), 199-222. 

An Age Old Old Age Question

It is a truth universally acknowledged that the UK population is ageing. To be precise, by 2050 there will be 19 million people aged over 65. What is more rarely acknowledged is the scale of the problem this poses. Old age is the price any society pays for improved health care; the trouble is that our society simply cannot afford to pay it. In an ideal world of unlimited resources, the just solution might be for the state to cover the costs of everyone’s social care. Alas, we do not live in such a world. A year’s stay in an older people’s residential home can cost upwards of £30,000. Multiply that by 950,000 (around 5% of older people currently require care) and the bill is staggering.
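To make the scale concrete, here is a quick back-of-the-envelope calculation using only the figures above (£30,000 per resident per year, and roughly 950,000 people needing care):

```python
# Back-of-the-envelope estimate of the annual UK residential care bill,
# using the figures quoted above.
cost_per_resident = 30_000       # £ per year (a lower bound)
residents_in_care = 950_000      # around 5% of older people

total_bill = cost_per_resident * residents_in_care
print(f"£{total_bill:,} per year")  # £28,500,000,000 per year
```

That is £28.5 billion a year, at the lower bound, before the over-65 population grows further.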

 

I intend to outline three practically feasible alternative payment mechanisms and consider some of the potential injustices these systems may pose. There will be no 500-word dash to the most plausible/least objectionable/insert-political-theory-phraseology-here solution. I simply wish to generate some debate around one of the least fashionable, but most pressing, policy issues of our generation. Additionally, I would like to implore political theorists to consider justice through the lens of a real-world policy problem. We do not only ourselves but our society a disservice if we are unwilling to be stirred from our ivory towers to get down and dirty in the dilemmas of real-world policy making. And in any case, in 50 years’ time we will all be reaping what we sow now.

 

So, possible solution one: make individuals pay, but provide a safety net for those who cannot. This is pretty much how the system operates in the UK at present. There are two main problems with it. First, the safety-net care paid for by the state is inadequate. State-funded care is poor in quality and choice and, with the pressure on it increasing, is only likely to get worse. The NHS is built around the intuition that people should not receive inferior care because they cannot afford to pay – why should older people’s care be any different? Second, it is highly debatable whether it is fair to ask people to pay for their own care. Not everyone who gets old will need social care. Is it fair to ask an old person who is unlucky enough to need care to pay, often exhausting all their assets in the process, when their neighbour in good health will not part with a penny?

 

Solution two: raise taxes such that all care can be funded. Putting to one side the usual questions that surround high taxes (will they destroy the UK economy, will the super-rich move abroad, and so on), this seems unfair because it places the bulk of the burden on the younger generation. Those who are already retired will avoid having to pay for their care without ever paying any such tax themselves. Given that the youth of the UK already face greater economic hardships and fewer opportunities than their parents’ generation, is it fair to disadvantage them further by levying a new tax? Or is this one-off disadvantage one society must accept for a better care system for future generations? Further, is such a tax sustainable? The latter is an empirical question which depends on economic recovery and projections. In any case, any tax sufficient to cover the scale of the problem would need to be substantial.

 

Solution three: people are left to insure themselves against the risk of expensive social care. There are already companies providing services akin to this, but premiums are so high that few people choose them. This may be more palatable than high taxes because people choose whether or not to insure themselves against the risk of high costs, making it less financially punishing and less paternalistic. Unfortunately, the flip side of an absence of paternalism is that people may fail to insure themselves altogether, meaning they could be forced to pay high costs for their bad decisions in later life. It is my opinion that some form of compulsory insurance system may be the least unpalatable option, not least because old people insuring themselves now would pay significantly more than young people, and pay this equally, thus bearing the cost of their generation’s care themselves. However, in addition to the problem of paternalism, this measure would also be highly inconsistent: there are many things it may be beneficial for people to insure themselves against that are not compulsory. How could this inconsistency be justified? Ultimately, my answer is that consistency should be an aid to justice, not an end in itself. I for one would rather live in an inconsistent society with more just outcomes than in one where consistency is pursued above all else.

