Your kids want to make Minecraft YouTube videos – but should you let them? | Technology | The Guardian

Don’t put your daughter on the stage, Mrs Worthington. But in 2016, what if the stage is YouTube, and your daughter (or son) is demanding to be put on it, playing Minecraft? That’s the dilemma facing a growing number of parents, whose children aren’t just watching YouTube Minecraft channels like The Diamond Minecart, Stampy and CaptainSparklez – they want to follow in their blocky footsteps.

Source: Your kids want to make Minecraft YouTube videos – but should you let them? | Technology | The Guardian


Regulating social media, a problematic measure – ONG Derechos Digitales

With a growing number of legal initiatives seeking to regulate platforms such as Twitter and Facebook, around the world and in Latin America too, we focus on the key points of this debate and its implications for our rights.

From France to Bolivia, initiatives to regulate social media are being introduced around the world – a problematic measure.

This week, in news that is sadly no longer novel, we learned that a judge ordered Turkey’s internet operators to block Twitter and YouTube; access was restored a few hours later.

A few days earlier, after a rumour spread on social media claiming there was a wave of child kidnappings, the government of Venezuela declared the need to regulate social media, because much of the information being shared allegedly sought to “create chaos”.

Venezuela is by no means the only country in Latin America whose authorities have declared an intention to regulate social media. A quick survey turns up examples in Ecuador, Peru and Guatemala.

But even developed, fully democratic countries have promoted initiatives to control social media: Australia announced a programme to monitor networks in real time to prevent terrorist propaganda; France wants to make companies such as Facebook or Twitter legally liable for hosting xenophobic content.

Analysis of social media regulation is complex, with direct implications for rights such as freedom of expression and privacy. Below, we offer some keys to better understand what is at stake behind these initiatives.


I caught my husband watching pornography – I’m shocked | Life and style | The Guardian

We have been married for more than 30 years, and I am deeply upset to learn that there is this hidden side to his character

My husband and I are in our early 60s. We have been married for more than 30 years and are quite happy together, other than having had a range of family issues to deal with. Our sex life has dwindled, but we are still very affectionate.

The other night I went into my husband’s study unexpectedly and he seemed to be looking at pictures of naked women on his computer. I made no comment because there was an urgent matter requiring attention and we hurried away to attend to it. I think he believes that I didn’t see the screen.

I was shocked and wondered if I had imagined it. It seemed so out of character – he is a highly respectable, scholarly person, not inclined to tackiness. I checked his laptop a few days later – mainly to reassure myself that I had imagined it, or that they were paintings or something (he is an art fan). However, the history for that date was deleted, which was suspicious in itself. I located it in the system files and discovered he had been on a range of pornographic sites.

I am deeply, deeply upset by this. I am not prudish – it is not the pornography that I object to, but rather that I am so shocked by discovering this hidden side of his character. Am I overreacting?


If tech companies wanted to end online harassment, they could do it tomorrow | Jessica Valenti | Comment is free | theguardian.com

The courts may decide that sending threats over social media isn’t threatening enough to be a crime. Silicon Valley needs to step up or lose customers

When online harassment is routine, being online might become less of a part of women’s routine. Photograph: Alamy

If someone posted a death threat to your Facebook page, you’d likely be afraid. If the person posting was your husband – a man you had a restraining order against, a man who wrote that he was “not going to rest until [your] body [was] a mess, soaked in blood and dying from all the little cuts” – then you’d be terrified. It’s hard to imagine any other reasonable reaction.

Yet that’s just what Anthony Elonis wants you to believe: that his violent Facebook posts – including one about masturbating on his dead wife’s body – were not meant as threats. So on Monday, in Elonis v United States, the US supreme court will begin hearing arguments in a case that will determine whether threats made on social media count as protected speech.

If the court rules for Elonis, those who are harassed and threatened online every day – women, people of color, rape victims and young bullied teens – will have even less protection than they do now. Which is to say: not damn much.

For as long as people – women, especially – have been on the receiving end of online harassment, they’ve been devising mundane and occasionally creative ways to deal with it. Some call law enforcement when the threats are specific. Others mock the harassment – or, in the case of videogame reviewer and student Alanah Pearce, send a screenshot to the harasser’s mother.

But the responsibility of dealing with online threats shouldn’t fall on the shoulders of the people who are being harassed. And it shouldn’t need to rise to being a question of constitutional law. If Twitter, Facebook or Google wanted to stop their users from receiving online harassment, they could do it tomorrow.

When money is on the line, internet companies somehow magically find ways to remove content and block repeat offenders. For instance, YouTube already runs a sophisticated Content ID program dedicated to scanning uploaded videos for copyrighted material and taking them down quickly – just try to bootleg music videos or watch unofficial versions of Daily Show clips and see how quickly they disappear. But a look at the comments under any video makes clear there’s no real screening system for even the most abusive language.
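To make the point concrete: even a crude first-pass screen needs very little machinery. The Python sketch below is a hypothetical illustration – the blocklist phrases and function name are invented, not anything YouTube or Facebook actually runs – but it shows how low the baseline for holding the most explicit abuse for review could be.

```python
# A deliberately naive first-pass comment screen: hold any comment
# containing a blocklisted phrase for human review before publishing.
# Real platforms would need trained classifiers and context, but even
# this much would catch the crudest, most explicit abuse.
BLOCKLIST = {"soaked in blood", "i will find you"}  # invented placeholder phrases

def needs_review(comment: str) -> bool:
    text = comment.lower()
    return any(phrase in text for phrase in BLOCKLIST)

comments = ["Nice video!", "I will find you tonight"]
held = [c for c in comments if needs_review(c)]
print(held)  # ['I will find you tonight'] -- queued for a moderator
```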

If these companies are so willing to protect intellectual property, why not protect the people using their services?


Google Brazil’s removals after the defeat to Germany | SurySur

Jul 21, 2014

The search engine removed results related to the defeat to Germany on the grounds that they were offensive. Words such as “defeated”, “humiliated” and “destroyed” were blocked, and the association with “shame” was likewise suppressed.

Brazil’s hopes were cut short by the heavy semi-final defeat to Germany. “Defeated”, “humiliated”, “destroyed”: those were some of the words most searched on Google alongside the term Brazil. Yet the search engine offered no results related to those words – nor to “shame” – in Trends, its tool for consulting search trends in real time.

Google built a World Cup version of its trends page, showing match results, comments and curiosities, as well as the most common searches before and after each game. To update and publish this site it set up a newsroom at Google’s San Francisco offices, expressly for the event.

The group’s aim is to take trends in what users search for and, after analysing them, turn them into social content designed to spread on Google+, Facebook or Twitter. For example, during the final the phrase “four stars” was searched alongside Germany, referring to the stars the Germans will now wear on their shirts, while searches alongside Argentina to do with “keeping the faith” were on the rise.

Sam Clohesy, who runs this experiment, defends the decision to remove the negative words alongside Brazil. It was not an order, he says, but a decision born of compassion: “We don’t want to rub salt in the wounds. A negative story about Brazil isn’t necessarily going to do well on social media.” Unlike the main search engine, where sensibilities are not taken into account, in this tool the real result that would have set the trend has been adjusted.


Innovation in exchange for toying with feelings | Tecnología | EL PAÍS

Facebook will keep running experiments on its users

San Francisco, 3 Jul 2014 – 11:25 CET

Sheryl Sandberg, Facebook’s second-in-command, yesterday during a conference in New Delhi. / Kuni Takahashi (Bloomberg)

Unwitting guinea pigs: Facebook toyed with the emotions of 689,000 users, without warning, for an academic study, taking it for granted that this falls within the social network’s ambiguous, interminable terms of use. The initial apology from Adam Kramer, the data analyst responsible for the study, has now been joined by two voices making clear that this was no mistake and that Facebook intends to keep going down the same path.

While Kramer insisted that the intention was not to cause distress but to work out how subscribers react to what they read, Sheryl Sandberg, the social network’s number two, has been far more emollient. She puts the uproar down to a communication error: “This is part of the routine research companies like ours do to test different products, and nothing more. We explained it very badly. We apologise for the communication, because we did not want to upset anyone.” Her clarification came from New Delhi, at a conference where she announced that Facebook has now passed 100 million users in India.

Monika Bickert, head of public policy at Mark Zuckerberg’s company, is even more lax: “In the future we have to make sure we are transparent, both with regulators and with the people who use our product, so that they know exactly what we are doing.”

The responses from both Sandberg and Bickert suggest that Facebook intends to keep probing its users’ behaviour in order to analyse how they react. James Grimmelmann, a professor at the University of Maryland, takes a middle position: “When you do research, you give notice. Facebook puts it in its terms of use, but it did not warn that, for some users only, it would alter what had until then been normal operation.”


Facebook apologises for psychological experiments on users | Technology | theguardian.com

The second most powerful executive at the company, Sheryl Sandberg, says experiments were ‘poorly communicated’

Facebook’s Sheryl Sandberg apologises for poor communication over psychological experiments. Photograph: Money Sharma/EPA

Facebook’s second most powerful executive, Sheryl Sandberg, has apologised for the conduct of secret psychological tests on nearly 700,000 users in 2012, which prompted outrage from users and experts alike.

The experiment, revealed by a scientific paper published in the March issue of the Proceedings of the National Academy of Sciences, hid “a small percentage” of emotional words from people’s news feeds, without their knowledge, to test what effect that had on the statuses or “likes” that they then posted or reacted to.

“This was part of ongoing research companies do to test different products, and that was what it was; it was poorly communicated,” said Sandberg, Facebook’s chief operating officer, while in New Delhi. “And for that communication we apologise. We never meant to upset you.”

The statement by Sandberg, deputy to chief executive Mark Zuckerberg, is a marked climbdown from its insistence on Tuesday that the experiment was covered by its terms of service. The secret tests mean that the company faces an inquiry from the UK’s information commissioner, while the publishers of the paper have said they will investigate whether any ethics breach took place. Psychological tests on human subjects have to have “informed consent” from participants – but independent researchers and Facebook have disagreed on whether its terms of service implicitly cover such use.


How does Facebook decide what to show in my news feed? | Technology | theguardian.com

Controversial emotion study is a reminder that the social network’s filters are constantly at work in the background

The average Facebook user sees 300 updates a day out of a possible 1,500. Photograph: Dado Ruvic/Reuters

Facebook is secretly filtering my news feed? I’m outraged!

Not so secretly, actually. There is controversy this week over the social network’s research project manipulating nearly 700,000 users’ news feeds to understand whether it could affect their emotions.

But Facebook has been much more open about its general practice of filtering the status updates and page posts that you see in your feed when logging on from your various devices. In fact, it argues that these filters are essential.

Essential? Why can’t Facebook just show me an unfiltered feed?

Because, it argues, the results would be overwhelming. “Every time someone visits news feed there are on average 1,500 potential stories from friends, people they follow and pages for them to see, and most people don’t have enough time to see them all,” wrote Facebook engineer Lars Backstrom in a blog post in August 2013.

“With so many stories, there is a good chance people would miss something they wanted to see if we displayed a continuous, unranked stream of information.”

Bear in mind that this is just an average. In another blog post, from June 2014, Facebook advertising executive Brian Boland explained that for heavier users the risk of story overload is greater.
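For readers who want the mechanics spelled out, here is a minimal Python sketch of the kind of ranked filtering Backstrom describes: score every candidate story, keep the top slice. The weights and field names are invented for illustration – Facebook’s real ranking model is far more elaborate and not public.

```python
from dataclasses import dataclass

@dataclass
class Story:
    author_affinity: float  # how often the user interacts with this friend or page
    engagement: int         # likes and comments the story has already drawn
    age_hours: float        # how old the post is

def score(story: Story) -> float:
    # Invented weights: favour close friends and fresh, popular posts.
    return 2.0 * story.author_affinity + 0.1 * story.engagement - 0.5 * story.age_hours

def build_feed(candidates: list[Story], limit: int = 300) -> list[Story]:
    # Roughly 1,500 candidates in, 300 ranked stories out, per Backstrom's figures.
    return sorted(candidates, key=score, reverse=True)[:limit]
```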


Using Facebook for science – a guide for megalomaniacs | Dean Burnett | Science | theguardian.com

Facebook was recently criticised for conducting research on users without their knowledge. One consideration seems to have been overlooked, however, which is that there’s nothing to say this is the only study Facebook is doing. What else could it be up to, given its influence?

Facebook seems to have many problems, but what if all of these are due to scientific experiments intended to exert control over the planet? Photograph: Thomas White/Reuters

I recently heard some commercial radio DJs claim that “nobody uses Facebook any more”. Facebook is the most popular social network on Earth, with more than 1 billion users, so this is really stretching the definition of “nobody”.

Given its direct access to nearly 15% of the human race, it would be fair to say that Facebook wields considerable power, so perhaps it shouldn’t be surprising to find it has been running experiments on some users, without their knowledge.

The reaction to this has been extensive and varied. Some are OK with it; many aren’t. There are people better qualified than me to discuss the ethical and methodological concerns and the scientific validity of this study. But one thing shouldn’t be overlooked: who says this is the only research Facebook is doing? It has published the findings of one experiment, but why stop there? With its influence and resources, there’s no telling what it could be up to. Maybe the quirks and irritations many Facebook users complain about are the result of active scientific manipulation?

So here are just a few possible studies that we could be unwitting subjects for.


Facebook experimented on 689,000 users without their consent | Tecnología | EL PAÍS

A user browses a Facebook page. / Reuters

A week of experimentation and millions of comments, most of them negative: those have been the consequences of a study carried out by several Facebook engineers. The world’s largest social network took 689,000 profiles, without notice or consent, and analysed their behaviour by altering the algorithm that selects which news from friends each user sees. One group saw positive news; the other, negative.

The outrage erupted when it emerged that the study had been published on the website of the United States National Academy of Sciences. Only English-language profiles were used for the test. Users’ reluctance to comment on or interact with negative-tinged content – overly emotional or verging on sadness – was much higher, at times up to 90% of the usual level. The study concludes that yes: after a week, the mood of one’s Facebook contacts’ posts does draw users along the same negative or positive drift, depending on which group they were assigned to.

We care about the emotional impact of Facebook on the people who use it

Adam Kramer, co-author of the study

Experiments of this kind, based on interaction, are very common on certain websites and above all in e-commerce, but without taking the tone of the content into account. A/B testing means showing different presentations of the same site (whether the page layout or the style of the icons) in order to study whether users spend more time on it, click more, and so on – but never using the tone of the content as one more variable.
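As a rough sketch of the mechanics (in Python, with invented function and experiment names): conventional A/B testing only needs to split users deterministically into variants. Nothing in the machinery itself depends on the emotional tone of what each group is shown – which is exactly what made Facebook’s choice of variable unusual.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    # Deterministic hashing: the same user always lands in the same
    # variant of a given experiment, with no per-user state stored.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).digest()
    return variants[digest[0] % len(variants)]

# A conventional layout test: which icon style a user sees.
print(assign_variant("user-42", "icon-style-test"))
```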

At first Facebook merely said that the posts – status updates – could be viewed in the usual way, without acknowledging that the breach of trust with its subscribers lay in selecting one option or the other (positive or negative news) for experimental ends. Late on Sunday, through the profile of Adam Kramer, co-author of the study and a data analyst at the firm, a somewhat more concrete explanation appeared: “We care about the emotional impact of Facebook on the people who use it, which is why we did the study. We felt it was important to investigate whether seeing positive content from friends kept people coming back, or whether what was shared being negative put them off visiting Facebook. We did not want to upset anyone.”

We did not want to upset anyone

Adam Kramer, co-author of the study

In his explanation he maintains that the experiment affected only 0.04% of users – one in every 2,500 profiles – for one week at the beginning of 2012. The social network told the British newspaper The Guardian that its intention was to improve the service and to show more relevant content that would create a closer bond with the audience.

What Mark Zuckerberg’s company does not seem to grasp is that the unease arises the moment the established arrangement – a broadly similar algorithm for everyone – is broken and its users’ feelings are experimented upon. Nor does it address whether the knowledge gained from this experiment might be applied to the advertising sold on the platform.

In any case, one is left with the sense that this experiment came to light only because the study was published; anyone could be the subject of many others run by Facebook’s data analysts, with no notice required. Under its terms of use, having a profile explicitly grants the company permission to use data for “internal operations, troubleshooting, data analysis, testing, research and service improvement”.


How Covert Agents Infiltrate the Internet to Manipulate, Deceive, and Destroy Reputations – The Intercept

Featured photo: A page from a GCHQ top secret document prepared by its secretive JTRIG unit

One of the many pressing stories that remains to be told from the Snowden archive is how western intelligence agencies are attempting to manipulate and control online discourse with extreme tactics of deception and reputation-destruction. It’s time to tell a chunk of that story, complete with the relevant documents.

Over the last several weeks, I worked with NBC News to publish a series of articles about “dirty trick” tactics used by GCHQ’s previously secret unit, JTRIG (Joint Threat Research Intelligence Group). These were based on four classified GCHQ documents presented to the NSA and the other three partners in the English-speaking “Five Eyes” alliance. Today, we at the Intercept are publishing another new JTRIG document, in full, entitled “The Art of Deception: Training for Online Covert Operations.”

By publishing these stories one by one, our NBC reporting highlighted some of the key, discrete revelations: the monitoring of YouTube and Blogger, the targeting of Anonymous with the very same DDoS attacks they accuse “hacktivists” of using, the use of “honey traps” (luring people into compromising situations using sex) and destructive viruses. But, here, I want to focus and elaborate on the overarching point revealed by all of these documents: namely, that these agencies are attempting to control, infiltrate, manipulate, and warp online discourse, and in doing so, are compromising the integrity of the internet itself.

Among the core self-identified purposes of JTRIG are two tactics: (1) to inject all sorts of false material onto the internet in order to destroy the reputation of its targets; and (2) to use social sciences and other techniques to manipulate online discourse and activism to generate outcomes it considers desirable. To see how extremist these programs are, just consider the tactics they boast of using to achieve those ends: “false flag operations” (posting material to the internet and falsely attributing it to someone else), fake victim blog posts (pretending to be a victim of the individual whose reputation they want to destroy), and posting “negative information” on various forums. Here is one illustrative list of tactics from the latest GCHQ document we’re publishing today:

Other tactics aimed at individuals are listed here, under the revealing title “discredit a target”:


The Internet Ideology: Why We Are Allowed to Hate Silicon Valley – Debatten – FAZ

It knows how to talk about tools but is barely capable of talking about social, political, and economic systems that these tools enable and disable, amplify and pacify. Why the “digital debate” leads us astray.

If Ronald Reagan was the first Teflon President, then Silicon Valley is the first Teflon Industry:  no matter how much dirt one throws at it, nothing seems to stick. While “Big Pharma,” “Big Food” and “Big Oil” are derogatory terms used to describe the greediness that reigns supreme in those industries, this is not the case with “Big Data.” This innocent term is never used to refer to the shared agendas of technology companies.  What shared agendas? Aren’t these guys simply improving the world, one line of code at a time?

Let’s re-inject politics and economics into this debate

Do people in Silicon Valley realize the mess that they are dragging us into? I doubt it. The “invisible barbed wire” remains invisible even to its builders. Whoever is building a tool to link MOOCs to biometric identification isn’t much concerned with what this means for our freedoms: “freedom” is not their department, they are just building cool tools for spreading knowledge!

This is where the “digital debate” leads us astray: it knows how to talk about tools but is barely capable of talking about social, political, and economic systems that these tools enable and disable, amplify and pacify.  When these systems are once again brought to the fore of our analysis, the “digital” aspect of such tool-talk becomes extremely boring, for it explains nothing. Deleuze warned of such tool-centrism back in 1990:

“One can of course see how each kind of society corresponds to a particular kind of machine – with simple mechanical machines corresponding to sovereign societies, thermodynamic machines to disciplinary societies, cybernetic machines and computers to control societies. But the machines don’t explain anything, you have to analyze the collective arrangements of which the machines are just one component.”

In the last two decades, our ability to make such connections between machines and “collective arrangements” has all but atrophied. This happened, I suspect, because we’ve presumed that these machines come from “cyberspace,” that they are of the “online” and “digital” world – in other words, that they were bestowed upon us by the gods of “the Internet.” And “the Internet,” as Silicon Valley keeps reminding us, is the future. So to oppose these machines was to oppose the future itself.

Well, this is all bunk: there’s no “cyberspace” and “the digital debate” is just a bunch of sophistries concocted by Silicon Valley that allow its executives to sleep well at night. (It pays well too!) Haven’t we had enough? Our first step should be to rob them of their banal but highly effective language. Our second step should be to rob them of their flawed history. Our third step should be to re-inject politics and economics into this debate. Let’s bury the “digital debate” for good – along with an oversupply of intellectual mediocrity it has produced in the meantime.


NSA and GCHQ: the flawed psychology of government mass surveillance | Chris Chambers | Science | theguardian.com

Research shows that indiscriminate monitoring fosters distrust, conformity and mediocrity

John Hurt in a film adaptation of George Orwell’s 1984. Big Brother: a government that engages in mass surveillance cannot claim to value innovation, critical thinking or originality. Photograph: Ronald Grant Archive

Recent disclosures about the scope of government surveillance are staggering. We now know that the UK’s Tempora program records huge volumes of private communications, including – as standard – our emails, social networking activity, internet histories, and telephone calls. Much of this data is then shared with the US National Security Agency, which operates its own (formerly) clandestine surveillance operation. Similar programs are believed to operate in Russia, China, India, and in several European countries.

While pundits have argued vigorously about the merits and drawbacks of such programs, the voice of science has remained relatively quiet. This is despite the fact that science, alone, can lay claim to a wealth of empirical evidence on the psychological effects of surveillance. Studying that evidence leads to a clear conclusion and a warning: indiscriminate intelligence-gathering presents a grave risk to our mental health, productivity, social cohesion, and ultimately our future.