Bigotry, Hate Hoaxes & Congressional Hearings–Futurum Tech Podcast Episode 040

On this edition of the Futurum Tech Podcast, can social media platforms really be expected to identify, assess, and remove harmful content in real time? The U.S. Congress thinks so. Uber has itself an IPO. Amazon’s minimum wage shift. Microsoft Word and Google Docs become besties. The FCC announces a $20 billion investment fund in rural 5G deployments, plus a quick analysis of Amazon’s Alexa data collection response. Can Disney become the next Netflix? Those stories and more coming up on this episode of FTP.

Our Main Dive

What could possibly go wrong when YouTube live-streams a congressional hearing on White Nationalism? The White Nationalists show up, resulting in a shut-down of user comments.

When racism, bigotry, hatred, and other unacceptable behaviors invade social media, the tech and social platforms have a responsibility to address the issue. But while tech often enables bad behavior, more tech may not be the whole solution. Could a better “block first, ask later” policy toward inappropriate online behavior be the key?

Our Fast Five

We dig into the week’s interesting and noteworthy news:

Tech Bites

Amazon’s “Alexa Listens then Amazon Employees Listen” revelation.

Crystal Ball: Future-um Predictions and Guesses

What do we expect in the Disney+ vs Netflix battle?


Fred McClimans: Welcome to this week’s edition of FTP, the Futurum Tech Podcast. I’m your host today, Fred McClimans, joined by my colleagues at Futurum Research, Daniel Newman and Olivier Blanchard. Gentlemen, welcome to the podcast.

Olivier Blanchard: It’s good to be here.

Daniel Newman: Yeah, loving life.

Fred McClimans: That’s not an enthusiastic endorsement of …

Daniel Newman: Come on, man, we’re watching congressional hearings. We’re going to have to do something to turn this up and make it interesting. You promised we’re going to talk about congressional hearings and make it interesting …

Fred McClimans: Yes, we are.

Daniel Newman: … today for all of our listeners today.

Fred McClimans: We are, we are.

Olivier Blanchard: I love C-SPAN 1 and I love C-SPAN 2 even better, true story.

Fred McClimans: Olivier, that’s what we love about you. We do have a lot of interesting things to talk about today. We will be talking, as Dan mentioned, about a recent congressional hearing gone awry. We’ll also be talking a little bit about what the FCC is doing with 5G these days. We’ll take a quick look at Uber’s IPO, talk about Amazon a little bit, because it’s hard not to talk about Amazon, and we’re going to take a quick look at what’s going on with Disney and Netflix and Disney’s whole move to the streaming environment.

Before we move forward, I do want to remind our listeners that as always, the Futurum Tech Podcast is for informational and entertainment purposes only.

We are not offering any stock advice. You should not take any stock advice from us, even though we will talk about equities on the program here today. With that, gentlemen, it has been a crazy … I’d like to say a crazy day, but it’s been a crazy week, a crazy year, maybe even a crazy decade for tech. We find ourselves in a situation where we’re experiencing the benefits of tech, all the great things that come from it: our ability to search, our ability to use technologies like machine learning to better understand the data we capture and drive faster insights.

We’re able to use technology now for 5G phone calls. We have created with tech this global forum.

It’s not just a global square. It’s a 24/7 pervasive communications medium, and we’re all hyper-connected to it. Anybody can say literally anything they want to say. That’s where we start to get into a little bit of the issues here today. I’m going to play an audio clip. This is from the congressional hearing held earlier this week, which quite literally was a congressional hearing on white nationalism. The hearing itself got disrupted because of the tech being used to broadcast it, tech that is part of all the great things we love about tech and government: Gov 2.0, transparency, citizen engagement. But when we actually put those things out there, they get abused, as you’ll hear in a moment. Here we go. This is from the congressional hearing earlier this week:

Jerry Nadler: Before we go to the next witness, I want to read two paragraphs from a Washington Post story that was just posted online. A congressional hearing to explore the spread of white nationalism on social media, meaning this hearing, quickly served to illustrate the problem Silicon Valley faces after anonymous users on YouTube began posting vitriolic attacks that targeted others on the basis of race and religion. The hearing, held by the House Judiciary Committee, was streamed live on the video site owned by Google, which testified Tuesday.

Alongside the stream, a live chat featured posts from users, some of whom published anti-Semitic screeds and argued that white nationalism is not a form of racism. “These Jews want to destroy all white nations,” wrote … I won’t put in the name. “Anti-hate is a code word for anti-white,” wrote another, et cetera. This just illustrates part of the problem we’re dealing with.

Louie Gohmert: Could that be a hate hoax?

Jerry Nadler: What?

Louie Gohmert: Could that be another hate hoax? Just keep an open mind.

Jerry Nadler: Well, all I know is what I just read.

Fred McClimans: Okay, gentlemen. We have a hate hoax on our hands here today at the congressional hearing. For those who are not up on the hate-hoax aspect of all this, there is a persistent meme going around that every time there is some type of episode of white nationalism or hate, whether it comes from the right or from the left, the argument is made that it’s the targeted people themselves trying to make it look like they’re being attacked, sort of the Jussie Smollett syndrome. The irony in this whole situation is just staggering.

Here we have YouTube, a Google company, broadcasting a live discussion that we really need to have about a particular issue, and that very issue disrupts the discussion online to the point where it can’t be had. We try to have it, it gets corrupted. We try again, it gets corrupted again. I think the important thing to take out of this, aside from the congressman’s lack of understanding of how the real world actually works, how YouTube works, and what actually takes place out there, is that everybody expects technology to fix technology. They expect Google to fix the problem with YouTube because it’s their YouTube.

I don’t think that’s really the direction we should be looking in here today. I think there are some things that you just can’t correct with tech. Another example of this issue came up earlier this week with an article in the Daily Caller. Now, Google, which has testified before Congress several times before, but not enough in my opinion, has been accused by certain people on one of the discussion sites of fixing search results, actually not showing search results that are true and honest but results that are biased by Google’s team. Google has denied this. The way their algorithms work, they’re just not fudging things like that.

They do have a policy about websites that have been banned or labeled as not being good neighbors in the neighborhood of the web. There is a process they use when a site doesn’t meet the good-neighbor criteria but they still want that site in the search results. In essence, the site has been banned but they still want it to be searchable, so a team at Google goes in, reviews it, and makes sure it appears or doesn’t appear. That’s not changing anybody’s individual search results in any way, but it has now come to the forefront this week, with people accusing Google of intentionally misleading users and fudging search results.

This issue here, this is something we just can’t seem to get away from. We have these issues. We’ve created this global square. We have an issue with human behavior and we’re expecting tech to fix that human behavior. I think that’s the wrong approach. Olivier, this is something I know you and I have talked about offline here. How do we educate people to understand this is not a tech issue, this is a people issue? The people that maybe need to be adjusted might not be Google as much as it might be the people who are using the service.

Olivier Blanchard: Right. Well, it’s a really complex question that we’re not going to be able to answer completely in the allotted time today. I will point out that the main voice, the first voice that you hear in the clip is Jerry Nadler.

The second voice at the very end, who asked if all the white nationalists are secretly maybe, whatever, Jews trying to make Neo-Nazis look bad by pretending to be Nazis, is Texas representative Louie Gohmert, who in my opinion isn’t necessarily the sharpest tool in the shed to begin with.

Fred McClimans: I’m not even sure he’s in the shed.

Olivier Blanchard: Yeah. Not to cast aspersions on his asparagus, that’s an inside joke; our listeners who aren’t familiar with that meme can look it up. I think the first thing we need to understand is that the concept of freedom of speech gets a little bit convoluted when we start talking about it in terms of legislation and within the context of a congressional hearing. The First Amendment, which guarantees freedom of speech, is essentially a statement that gives the people in the United States the right to speak without being infringed upon by the U.S. government. It does not have anything to do with private companies being able to censor speech if they want to.

I’m not talking about the press necessarily, but a company like Google, a company like Facebook as non-government entities do not have to follow the First Amendment. It doesn’t have anything to do with them. They can decide what goes in their platforms and what doesn’t go on their platforms, what they can and cannot ban. When we talk about freedom of speech, there’s the general abstract concept of should we allow all speech in general as a philosophical question. Then, there’s the legal question relating to the First Amendment specifically. That applies only to the government not to companies like Facebook, or YouTube, or whomever.

We have to start addressing the problem this way first. Now, take the more abstract philosophical question of freedom of speech. If a company like YouTube or Google decides that white nationalism, and specifically its more virulent and toxic aspects, the ones that fall into the category not just of a bad neighbor but of actual hate speech toward vulnerable and protected groups, is not something its platform is going to be a forum for, it has every right to do so.

What we’re talking about here really is public pressure on these companies to be healthy public squares where people can come and share ideas, and not to become toxic cesspools of hatred, doxxing, and really dangerous, harmful behaviors. I think companies have a responsibility to try to create the types of spaces that cater to the general public and do not cause public harm. That’s entirely a choice that they make. I think that ultimately, as businesses, their success will live and die based on the decisions they make and the balance they’re able to achieve in that regard. I don’t necessarily think that it’s 100% the place of the government to regulate speech on YouTube and Facebook and other platforms. That’s not my personal view. I think I’m trying to take not a middle position but an objective, neutral position on the matter here.

Fred McClimans: I’m with you, Olivier. I don’t think that any government has the right to censor or impose restrictions on a public square, any more than they do on journalists. It’s just a natural right, I think, that we all have to speak our minds openly. We’re at the point now where there’s a push for regulation, or to put it a different way: we have data privacy issues with social media companies. We have breaches. We have lapses in security. We have Facebook handing data to a third party who then leaves it exposed on Amazon’s servers. These are all very significant issues that need to be addressed.

This issue of freedom of speech, or the ability of a company like YouTube to say, “Look, this is our square,” or Twitter saying, “We can ban who we want, when we want,” or Facebook, “We can do this.” Those are all very reasonable things, but they keep getting mashed up together. Maybe there’s something we can do to look at the behavior of people online and focus a little bit of attention there, because the behavior we witnessed during this hearing was reprehensible. There’s no other way to describe it. Dan, we’ve had some great conversations about the role of technology in society and in business, how it’s transforming our businesses and our lives, and some of the challenges we have with AI systems coming online, with issues of bias, or with understanding how they’re really thinking inside.

My concern is that if we can’t address issues like this, then the technology that we’re bringing out now in the coming years could be abused horribly.

Daniel Newman: Yeah, I don’t think that’s going to change. Here’s the thing, though. The idea of people self-governing their bigotry and racism is just completely unrealistic. You’re not going to get a bunch of people saying, “Oh, I’m going to use this platform better. I’m going to have more respectful, civil discourse.” Like I said, I can’t even write a controversial blog post anymore without worrying about at least a handful of people coming out and threatening me or telling me how silly or stupid I am, let alone people who have deep-seated racism and bigoted thoughts. They look at the internet, and I say “they” because it’s not everybody.

I have to be careful with generalizations. A lot of people look at the internet as a place for keyboard warriors. Look, white nationalism was pervasive before the age of the internet. But let people hide behind fake screen names and made-up accounts on social media and say anything they want, and of course they’re going to do it. The tech companies have to be accountable for this. They have to take a position. Right now we want our companies to take a stand on social issues. We want them to take a stand on social equality of race and gender. Racism and bigotry have to be on that list.

It’s like the NFL having to take a stand on how its players address the flag. I don’t mean that in a good or a bad way, but if the NFL holds a position where it does not support the players’ right, then it is basically saying it supports the other side. It’s really that black and white nowadays. If you allow racism to be pervasive on your platform, you are enabling racism. That’s what you are doing. When you have a voice that big, you are accountable for what you allow that voice to do. Think of it like being a talk show host. If David Letterman in his prime had decided to consistently allow that kind of hate speech to happen on his show, he may not have held those opinions, they may not have come from his mouth, but it was his platform.

All of that is exponential now. Social platforms, digital platforms are exponential versions of highly influential people, where they and everyone else have a voice if the platforms do not do something about it. Really quickly, let’s talk about what that something is. First of all, companies have to have abuse policies, and they have to stand by them and stick to them clearly on both sides, because I do see it on both sides. What I’m saying is, when it’s extreme and it’s clear, you can train; systems can learn. There are certain terms that you just do not allow to be posted on your site.

Like Olivier said, and he was very accurate and eloquent in his delivery, as a private corporation you have no accountability to the people who use your platform to allow them to be racist. That is not free speech as defined in the First Amendment. Free speech, as defined, means that if I don’t like what my governor or my president is doing, I have the right to articulate that. It doesn’t give me the right to be abusive toward people I don’t like on a platform I don’t own. I think the companies have all the control here, whether they want to be stricter or less strict with their platforms, and whether they open them up to people being more hateful, or to debates that stretch beyond what I would consider reasonable discourse into using hateful, bigoted commentary as a way to articulate an opinion.

They should just turn it off. That’s my opinion. They should turn it off. They should stop it. They should block them. They should ban them. Unfortunately, they’ll keep coming back, because these people are like ants in the ground. Do you know what I mean? They’ll go back underground, create a new account, and come back. But you know what? I’ll end on this thought, because I know I’m rambling on: if these sites had a clear position, they might get some hate from those groups. Well, those groups have always had to go off in a corner and hate amongst each other. Let them do that. Push all these groups into one corner, let them be angry and hateful there, and get them out of the mainstream, because that’s all the platforms are doing by allowing them to stay.

Olivier Blanchard: I agree. I want to touch on the technical, or the technological, aspect of this real quick, because we’ve talked about the intent and the general morality and good taste of it, but we haven’t actually talked about what the tools can and can’t do, and how our expectations don’t necessarily meet reality right now. One of the points that you made, Fred, initially in the segment was that there’s an expectation that these companies need to regulate themselves, need to do something about it, need to solve these problems. I think we’ve addressed the portion of that that deals with whether or not these platforms should allow white nationalism or extreme, toxic, harmful political views to have a forum or a place on their platforms.

The other portion is this: when content that is harmful, that is deliberately malicious, makes its way onto the platforms, the expectation is that the YouTubes, the Twitters, the Facebooks of the world will immediately be able to react to it. First, identify it; second, block it, turn it off, remove it from view. Australia just recently passed a law holding social media companies criminally liable when they don’t remove violent content, especially that type of violent content, from their platforms quickly enough. The issue with that is, first of all, none of these companies have AIs or staff with sufficient capability to identify this content when it’s first uploaded.

Then second, be able to assess it, and third, be able to remove it at scale. If a bot attack occurs, one that’s basically just trying to push and publish as many versions of the same video, the same content, as possible, you’re talking about thousands of accounts potentially uploading videos of hateful content, not necessarily at the same time but within a very short timeframe. These companies are not currently technologically able to identify, assess, and remove all that content quickly. Second, it’s not something that AI is going to be able to tackle anytime soon, if perhaps ever. Here’s one of the reasons why.

How do you tell the difference when a piece of content is being shared? Take, for example, a photo of a defaced Jewish cemetery in Poland with swastikas spray-painted on the broken gravestones. You have this image. How does an AI determine the difference between a white nationalist or a Neo-Nazi showing that photo as an example of “Yay! We’re winning,” and a legitimate journalist with CBS News, or CNN, or even Fox News for that matter, showing it as evidence for a news item? It’s the same image. Context matters, and AIs are not going to be able to differentiate between the two without the help of humans specifically trained in that endeavor. We can’t censor all news that deals with white nationalism or toxic political views because context …
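[Editor’s note: one partial mitigation for the re-upload wave Olivier describes, thousands of accounts pushing copies of the same content, is perceptual hashing: known-bad content is fingerprinted once, and near-duplicate uploads can be caught without re-assessing each one. It does nothing for the context problem he raises here. Below is a minimal, purely illustrative sketch of a “difference hash”; all names are made up, this is not any platform’s actual pipeline, and a real system would decode video keyframes rather than take a raw grid of pixels.]

```python
# Sketch: perceptual "difference hash" (dHash) for catching near-duplicate uploads.
# An "image" here is just a 2D grayscale grid (list of rows of 0-255 ints).

def resize(img, w, h):
    """Nearest-neighbor downsample to w x h."""
    H, W = len(img), len(img[0])
    return [[img[y * H // h][x * W // w] for x in range(w)] for y in range(h)]

def dhash(img, size=8):
    """64-bit hash: each bit records whether a pixel is brighter than its right neighbor."""
    small = resize(img, size + 1, size)
    bits = 0
    for row in small:
        for x in range(size):
            bits = (bits << 1) | (1 if row[x] > row[x + 1] else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def is_known_bad(img, blocklist_hashes, max_distance=4):
    """Flag an upload whose hash is within max_distance bits of known-bad content."""
    h = dhash(img)
    return any(hamming(h, bad) <= max_distance for bad in blocklist_hashes)
```

Because small pixel changes barely move the hash, a re-encoded or lightly edited copy still lands within a few bits of the original fingerprint, which is what makes this usable at upload scale.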

Fred McClimans: Nor should we.

Olivier Blanchard: Right.

Fred McClimans: Exactly. And that’s the issue, Olivier: if you rely too much on the tech, you have to step back and say, well, let’s have people do it. But we don’t have enough people in the world to ingest all the data …

Olivier Blanchard: Correct.

Fred McClimans: … that’s coming onto the internet every day. And I’ll put this in front of you: those two examples you gave, show them to 100 people and there’s probably 30% of them who can’t figure out which one is which. Which is the hate speech, and which is the “hey, here’s an example of something we shouldn’t do” speech?

Olivier Blanchard: That’s right. It’s not even about the content, it’s about the intent behind the publishing of the content.

Fred McClimans: Context is everything, yes.

Olivier Blanchard: Right. We’re sitting here having these types of discussions, and props to Chairman Nadler for bringing this up. We need to have these discussions. But there’s a general expectation from legislators on the one hand, and from consumers and voters on the other, that all these social media platforms need to clean up their act and fix this and come up with a solution. There is no solution. There is no practical way of doing this in the short timeframe that’s being imposed on them. Right now, it’s not technologically feasible. In the meantime, first of all, we have to state this. Second, we have to adjust our expectations. Third, we have to create a roadmap, starting today, of things that companies actually can do, and products and solutions they can develop, in order to do better over time, and build that roadmap with milestones. We can’t expect it to be an overnight fix, because it can’t be.

Fred McClimans: Right. I think the important thing in all of this is to recognize, as you said, that there’s a roadmap we need to create. There needs to be a sense of transparency. Our economy, not just in the U.S. but globally, our education system, our political system, it’s all fueled by information, by digital information, today. Something has to change here, because we’re close to a breaking point: the system we have created, and that we continue to promote, use, and leverage, has become fundamental to everyday life.

You can’t take it away at this point. It’s like going out and saying, “Hey, you know what, banking online was a bad idea, people are getting hacked. Everybody shut down your mobile apps.”

Olivier Blanchard: That’s not going to work.

Fred McClimans: You can’t get to that point, but we do need to think about it. I think it’s a discussion that everybody should be having right now because if things go any worse in this situation, it starts to disrupt not just the YouTube channels but it disrupts our global economy, and our businesses, and our society. That’s definitely not a good thing.

Daniel Newman: Let me just add one quick thing, because I think Olivier is right. I think tech itself will be challenged, although for machine learning I do think deep inference will come a long way in terms of being able to decipher what is actually being shown and why. That kind of stuff will come a long way. The other thing is credibility of source. Meaning, when Fox News shares a story, it is more credible than some individual person without even a proper photo of themselves in their profile. What I guess I’m saying is, there are a lot of variables, when a post is coming in and where it’s coming from, that could decide whether it is more or less credible.

The second thing I would say, because as you said, Olivier, freedom of speech is not a requirement for a private corporation, is to err on the side of caution. Meaning, a company can always block more content rather than less. I remember posting videos of my daughter’s soccer games online, and the censors would flag them because the music playing in the background was licensed. They could shut a video down even though the music wasn’t even clear enough to make out, and they would do it simply because they didn’t want to risk any lawsuit. What I’m saying is, you can actually censor more.

Then you can have a second layer of review, whether that’s humans or a deeper inference algorithm, and say, “Look, we are not posting this immediately, but it’s running through our moderation process. If we deem it’s okay, we’ll post it later.” Because nothing you’re posting is so important that it has to go up that second. The other thing is these shootings and things like that: shut that down immediately. When you have live shooters in a place, there’s absolutely no reason to leave it up. I seriously am sometimes concerned about the state of technology, where the platforms are all afraid that if they don’t have the video, the traffic is going to go elsewhere. They’re making decisions based on business, which is just disgusting.
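[Editor’s note: what Daniel describes, hold everything, screen it, publish only what clears, is essentially a moderation queue. Here is a minimal sketch of that flow; the term lists are made-up placeholders standing in for a real classifier, the “needs_human” status models his second layer of review, and none of these names reflect any platform’s actual API.]

```python
from dataclasses import dataclass

# Sketch of a hold-then-review moderation queue. Nothing goes live until
# triage clears it; borderline posts are routed to a human moderator.

BLOCKED_TERMS = {"slur1", "slur2"}      # placeholder tokens, not a real lexicon
SUSPECT_TERMS = {"hoax", "invasion"}    # borderline terms routed to a human

@dataclass
class Post:
    author: str
    text: str
    status: str = "pending"   # pending -> published | rejected | needs_human

def triage(post):
    """Block first, ask (a moderator) later: clear, reject, or escalate a post."""
    words = set(post.text.lower().split())
    if words & BLOCKED_TERMS:
        post.status = "rejected"
    elif words & SUSPECT_TERMS:
        post.status = "needs_human"
    else:
        post.status = "published"
    return post
```

The point of the “pending” default is exactly Daniel’s argument: the safe failure mode is a delayed post, not a published one.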

Fred McClimans: It’s algorithms making decisions now about …

Daniel Newman: Based on business though.

Fred McClimans: … and that’s abdicated responsibility.

Daniel Newman: Yeah, but that’s what I’m saying: corporations have to take a stand. This is where I actually say we’ve given a lot of control to the corporations. These corporations have massive platforms. They need to respect those platforms and realize that, not to talk stock, there is a long side to these decisions they’re making. If they enable this and allow this, they will be judged as companies that contributed to the demise of society.

Olivier Blanchard: Right. One last point, Fred, before we go, before I let you have the last word, or let yourself have the last word. The discussion we’ve had has focused so much on white nationalism, and racism, and hate speech, but the same principles apply to all of the fake … I’m not talking about fake news in the “fake news” sense, but false information, hoaxes, like the anti-vaccination thing, or like the anti …

Fred McClimans: Planner.

Olivier Blanchard: … 5G stuff that has real-world consequences. Right now there’s an epidemic of measles in New York. New York has declared a state of emergency because so many people have not vaccinated their kids. Meanwhile, Twitter, Facebook, and YouTube are still the breeding grounds for the type of misinformation that literally kills people. So let’s go ahead and add to the responsibility of these platforms to also screen for this type of stuff: not just the hate speech but also fake medical information, fake political information, all of the negative, malicious propaganda out there trying to cause harm and sow mayhem in our general community, in the public square.

Fred McClimans: Yeah. I’m with you on that, Olivier. I don’t believe that there are two sides to every conversation.

Olivier Blanchard: That’s right.

Fred McClimans: It’s not vax or anti-vax. It’s not left or right. There are so many sides wrapped up in here.

Yeah. We do need to take responsibility, though. I think we as individuals, we as businesses, we as citizens of the world need to take some responsibility here and say, “Look, there are certain things that just should never be shown online.” We talk about this a lot in our business practices: just because you can with tech doesn’t mean you should with tech. Just because we can give everybody a microphone and let them rant on a podcast somewhere like … I don’t know, here? It doesn’t mean that we necessarily should. It’s not always a great idea. This goes back, I think, to the Google phrase: good neighbors. We all need to be good neighbors.

We need to be proactive neighbors out there. You see something, say something. This is the future of our business, our society, our children’s lives that we’re all talking about here. Now, I am done talking about that. We’re going to move on to the next topic which is the FCC and the United States government. How did we end up back here again?

Olivier Blanchard: Well, you know …

Fred McClimans: Olivier, hey, take us into our fast five. We’re going to do a real quick fast five for our listeners today. We have gone a little bit over because this is a passionate topic. We’re going to pull it back in. We’re going to do 45-seconds-on-the-clock fast fives. Olivier, the clock starts with you. The FCC, $20 billion in 5G, what’s going on?

Olivier Blanchard: Yeah, okay. It’s a dual bit of news. The FCC plans a December auction for 5G bandwidth, which is interesting in and of itself. It also announced this week a $20 billion fund for rural internet investment. That’s really good news, because it’s going to bring 5G to rural areas a lot faster than it would have arrived without this financial boost. That’s good for consumers, and it’s also good for industry, because as you well know, 5G is going to drive the industrial IoT. A lot of manufacturing plants are not downtown in metropolitan areas. They’re out in the country a little bit, outside of the main service areas that the Verizons and AT&Ts of the world will initially invest in. This will help boost adoption of 5G for industry as well as consumers. It’s really nice to see the government invest $20 billion in this.

Fred McClimans: A very, very good cause. Dan, last month we had Lyft’s IPO … Lyft is now listed, I guess, or languishing in the water, as Uber is set to knock them off the IPO bench. What’s going on with Uber? What …

Daniel Newman: Yeah, I just want to say we were pretty spot on with our initial analysis of Lyft. It’s looking like Uber is going to follow suit in May, having filed their S-1 to go public on the New York Stock Exchange, with the IPO estimated to happen in May. A couple of quick stats: just over $11 billion in revenue in 2018, with an adjusted EBITDA loss of $1.85 billion, so they’re still losing money. Their monthly active users grew to 91 million in the fourth quarter, with 1.49 billion trips in that same quarter. Overall, I think you’re going to see a similar yet probably much more celebrated IPO, because Uber is definitely the Uber of ride share.

I don’t even know if that makes sense, but they’ve become the benchmark. This may be one of the biggest IPOs you’ll hear about or remember in a long, long time. This isn’t a stock advice show, but I can promise you, I will be standing on the sidelines, because when a company is losing $2 billion a year, it just can’t get there.

Fred McClimans: No, you can’t, especially when they don’t have any viable plan to actually get to profitability in any meaningful way. It’s a real tough situation. Uber is into so many things. What I would suggest, and we’ll drop these into the show notes, is a couple of really good links I came across about how somebody took AI tech, crunched Uber’s S-1, and extracted all the things that were unique relative to other filers. It turns out Uber is very concerned about some really interesting things that you may not have thought about, that I at least had not thought about. We’re going to move into my section of the fast five. Dan, thank you for that on Uber.

Hey, by the way, I had three Lyft or Uber rides in the last three, four days. Each one of the drivers was also a Lyft driver, and they vastly prefer working for Lyft over Uber. Totally anecdotal, not analysis, but there it is. Amazon, we’ve got to talk about Amazon today. Amazon and their Amazon Go technology. I’ve got to say, I like the technology that Amazon has. I like the model. I like the ability to walk into a store, any type of store, it doesn’t need to be a grocery store but some location, pick something off the shelf, have that recorded accurately, and have you charged or at least debited for that in some way.

Maybe it’s not cash, maybe it’s a pat on the back, maybe it’s a note to come back later and pay me, but that whole idea of tech in that kind of store, retail, or even warehouse environment, I think, is great, and it’s cashless. However, Amazon just announced that, yeah, it looks like they’re going to start taking greenbacks. Amazon Go goes green moving forward here in some locations. They haven’t revealed the plans. What’s behind this? Well, I think two things. First off, if people want to pay with cash and they don’t have a credit card or whatnot, take the cash, no reason not to.

Secondly, Philadelphia is right on the cusp of passing legislation that actually makes it illegal to have a cashless store. New York is probably going to follow suit. San Francisco, not far behind, I expect. It’s a big issue. We’ve got great tech. Now, it’s going to be great tech in a hybrid model, which is the way all good tech should be. Olivier, back to you. Good news for Chrome users. What’s going on?

Olivier Blanchard: Yes. Oh my goodness, I’m so excited by this. I’m not an Apple user, computer-wise. I use PCs. Typically, I use Microsoft Word a lot, and actually even Excel and PowerPoint quite a bit. I recently, for travel purposes, bought a smaller laptop that happens to be a Pixelbook. It’s a Chromebook. I have discovered, much to my dismay, that trying to work collaboratively on Word docs, especially with editors who send you corrections, is extremely difficult to manage on a Chromebook, because Microsoft Word and Google Docs don’t necessarily click super well with each other, especially when it comes to making edits or responding to edits, et cetera.

It’s been a little bit of a headache. It’s been very frustrating. I’m sure I’m not the only person out there who feels this way trying to work in Microsoft Word on a Chromebook. Well, earlier this week, Alphabet announced that Google Docs will soon, very soon, let you natively edit Microsoft Word, Excel, and PowerPoint files on their machines. That’s really good. The supported file types, according to Google, are going to be .doc, .docx, and .dot, plus pretty much every Excel file and pretty much every PowerPoint file as well. Working with Microsoft tools on a Chromebook is about to become much, much easier and much less of a headache. Kudos to that.

Fred McClimans: I love it. Definitely in my future, at least in the future for my kids. Dan, we got to talk more Amazon. What’s going on?

Daniel Newman: I know. It’s like the Amazon show. They have a lot of news. This actually follows our theme of social good with technology, social good by maybe limiting companies’ contributions to negative and inaccurate information, which, Olivier, by the way, I want to say was a great add at the end, because it really can be anything that’s misinformed and incorrect. Jeff Bezos is basically challenging the retail industry to push minimum wage to $15 by 2020. I’m very hard on Amazon, and Apple, and some other companies. I feel like when a company does something good, it’s extra important that we take the time to give them that shout for it. I like what Bezos is doing.

We know that current wages, like the $11 Walmart pays, are really not livable almost anywhere in America at this point. With job competition at extraordinarily high levels and unemployment rates cited at under 4%, companies want to bring in talent and bring people into the workforce, and hopefully pay them better and provide them better benefits, so that they can grow and continue to contribute to society, especially in the age of human-machine partnerships. Which brings me to a bonus sixth item: Olivier and I finished the manuscript for Human Machine.

Fred McClimans: Yeah!

Daniel Newman: I don’t think you can preorder it yet, but we’re going to put the link in the show notes. We’re going to have a show all about it, because there’s nothing we like to do more than tout our own success as … “Wait, that’s not our show.” All right, never mind.

Fred McClimans: It is now.

Daniel Newman: It is now. I was totally kidding. That was the end of the plug. Go Amazon, good stuff. Jeff Bezos, keep challenging companies like Target, who are raising the bar and meeting the challenge. As you said, why not go $16, why not go $17? Let’s get more workers living wages so that we can help society from every side. Not a socialist view, just thinking it’s good for society.

Fred McClimans: Yeah. I think it’s a great move for a number of reasons, especially since Amazon has been dinged a lot from a PR perspective about taxes: how much they pay, how much they don’t pay, whether they should collect, whether they don’t collect. There’s just so much, Olivier, to your point, misinformation about that whole topic out there. I think this is a good move that the richest man in the world can afford to make at this point in time. Tech Bites, and I hate to say it, guys, we’re going to do a real quick Tech Bites here, and it’s all about Amazon. It came out this week that Amazon, which has had issues in the past with their Alexa product, the Dots and the Echos, recording and misinterpreting or perhaps even sharing information that it hears within a person’s house, is at it again.

Amazon is actually taking snippets of what people say and passing them off so people can listen to them. Now, I know that sounds kind of odd, kind of out there. Bear with me. It makes total sense, because Amazon is all about taking everything they hear with Alexa and refining Alexa so they can offer better speech recognition performance, so that they can build an AI bot that actually communicates with you with understanding and context. We don’t have the AI technology that can do that today. It has to be people who prefilter and clean up and then feed that back into the AI engines to learn.

Amazon insists that people are not hearing complete conversations. They’re hearing snippets. It’s very controlled. There’s no way anybody can link back. In fact, in one of the articles I read, they talked about how somebody within this team thought they heard a crime taking place. They were told, “We can’t do anything about that. We don’t know who that person is. We can’t reach back. That’s just the level of firewall or separation that we have here.” I will say, and would love to get your two quick thoughts on this, the fact that Amazon had not disclosed this previously. Come on, guys. Really?

Olivier Blanchard: So …

Daniel Newman: Yeah, listen … You want this? I’ll let you go.

Olivier Blanchard: Actually, I was going to reference something that you posted on Twitter, Dan. You posted this quote from an Amazon spokesperson. The quote was, “All information is treated with high confidentiality and we use multi-factor authentication.” I just wanted to briefly comment on that because …

Fred McClimans: That means nothing.

Olivier Blanchard: That means nothing. It’s nice. It’s good. Oh, great, so you were spying on me, and you’re using multi-factor authentication, and you’re going to treat the stuff that you overheard inside my house with the utmost confidentiality. That doesn’t help me. But more to the point, confidentiality implies that one party has knowingly confided in the other. Amazon is no one’s confidant. They’re not my attorney or my doctor, nor is any entity that listens to your conversations without your explicit knowledge a confidant. I think that the use of the term confidentiality by Amazon in this case is absolutely wrong. This is not the proper way of addressing this breach of trust.

Daniel Newman: There’s a couple of things that I want to add to that thought. First of all, Amazon’s position is that all the snippets captured were after the wake word, after the device was woken, per se. The second thing Amazon wants everybody to know is that this is typical behavior for companies in the process of training AI. Apple has been doing it with Siri for a long time. Google is doing this as well, from what I was reading. Here’s my thought, very short and simple: they need to improve communication of how the data and information is being collected and utilized.

Listen, there are plenty of people willing to trade privacy for experiences, and plenty of people that, if you said, “Hey, we’re going to send you an Echo, you can use it, it’s in your house, enjoy it, but here’s what’s going to happen. We’re going to occasionally capture what you’re saying after you wake it up, because we want to train our products to do better. It’s the same thing as you’ve gotten for years with reporting an error when your browser crashes or when your app crashes. We want to help your experience get better.” A lot of people might not want that, but I’m sure …

Because the other thing Amazon said is that it’s an extremely small amount of the data that they’re using for training.

If that’s true, then why in the world do you have to do it randomly with people who have not opted in, when there’s a whole society of people that would be happy to give their data in exchange for maybe a free product or an improved experience? That’s what probably irks me the most about all these companies: the continued abuse of people who don’t know this is happening. People don’t know; they don’t read privacy policies. Look, the lawyers who write privacy policies for companies building applications know this. They know this. They chock them full of stuff that people wouldn’t understand even if they did read it. That’s because it’s all legalese. It’s designed to be this way.

You have an audience, Amazon. You have an audience, Apple. You have people that would die to opt in to give you information in exchange for just a simple little bit of acknowledgement, or a free product, or a couple of power cables, lightning cables, since they break every 15 minutes. The point is, where they’re wrong is not in what they’re doing.

This is part of what’s required to do training. What’s wrong is that you have a willing community of people who would help you do this on an opt-in basis. Instead, you continue to manipulate and violate people’s trust. That’s why these companies are going to continue to put themselves at risk until they get on the page of letting people know what they’re doing with everything.

Fred McClimans: Dan, I am an avid user of beta software, always have been, going back to my original early tech days fumbling around with code. My iPhone, my other tablet, all beta software. I don’t get compensated for it. I see the stuff that’s coming out first. I like that. I like giving back and saying, hey, look, if I have an issue with this, let me send an automatic note back to Apple or whomever so that they can fix that issue, so it’s better for everybody else. I’m not even sure you need to pay people, just give them a little pin that says, “I’m an Amazon Echo Alexa beta tester.” It’s that easy. You guys are both right. This is something that … Come on. It’s a cultural issue.

They’re just not thinking about the benefit and the relationship. Maybe they’re missing a great opportunity here to build some really good brand ambassadors for their technology, to get them through some of those tough issues. That’s it.

This week, we talked about Amazon, I think, four times now. They are our Tech Bites winner of the week. I guess they’re four for one, or one for four, four for four maybe. Before we go today, our crystal ball prediction, guys. We’ve got to drop one off here. Netflix had a horrible day in the market after Disney announced their new streaming channel, their new content package, and their deals with all the big major studios, dropping in at right about the same time for $6.99 a month, while Netflix is now sending out email notifications to everybody saying, “Hey, by the way, that basic package you have, it’s going to be $15 or $16 a month now,” which is ridiculous.

Guys, your quick take on this. Disney is jumping in, Netflix is entrenched. You have Apple over there. You’ve got AT&T. You’ve got HBO. You’ve got Hulu. Everybody out there is competing in this space. Does Disney stand a chance, such that two years from now, we all look back and go, man, they just crushed Netflix? What are the odds of that happening, Dan?

Daniel Newman: I think the odds are … I’m weighing them in as pretty low at this point because, like I said, they’re not even a fast follower at this point. They’re a slow follower. They do have great content. They can definitely use their content ecosystem to limit others, though I would think that would be an issue from a licensing standpoint, because they would lose a lot of revenue in order to try to capture market. They’ll get some market share. I think they could maybe see a Hulu level of success. I would be very surprised to see them pass up Netflix. I think Netflix has done a great job of building an identity and a brand. They are the de facto streaming product of choice right now. Not to say it will always be that way, but Disney would really have to do a great job. I’m not any more bullish on Disney than I am on Apple to pull this off.

Fred McClimans: Me either. Olivier, what’s your thought?

Olivier Blanchard: I’m going to disagree a little bit. I don’t want to be cynical about Disney. Actually, I agree with everything that Dan just said about them being a slow follower. However, I think that Disney is dangerous for Netflix and for everybody. They are the triple threat. On the one hand, they have the multigenerational appeal of Disney and pretty much the entire child-to-adolescent ecosystem. I’m not just talking about new shows. I’m talking about the nostalgia aspect of it, with all of the Disney and Pixar catalog of products and content. On top of that, they have Star Wars. I believe they also have the rights now to all the Marvel stuff as well.

If anybody can actually be really successful in this space, it’s definitely them. I see that Disney also has a package that includes sports. Was it ESPN or something? I can’t remember what it was exactly, but in a way that I think Apple’s streaming offering is not going to be able to compete with. I don’t think it’s going to dethrone Netflix. I think that people will be more likely to have both Netflix and Disney at the top of their list of streaming services than Netflix and Apple, or whatever other combination like Hulu or even Amazon Prime. I think I’m bullish on Disney’s chances here. I think they have a good package and a good strategy.

Fred McClimans: Maybe Apple becomes the loser here in this battle. I’ve got to say, there’s some good content on Netflix. I’m not a huge Netflix watcher. I had to step back from that binge process. I’ll put the difference this way. Netflix puts out a show and everybody raves about it. They binge watch for a weekend. Disney puts out a show, everybody raves about it. They watch it for a weekend. Then, the kids go out and buy the lunchbox, and the toys, and the pajamas, and this and that, and everything else. The ecosystem capability there that Disney brings to the table is just huge.

I have my doubts whether they can replicate the content strategy and build that relationship and stickiness that Netflix has. At the same time, I think the relationship that a lot of people have with Netflix isn’t as sticky as Netflix would like to think it is. With that, gentlemen, thank you for your crystal ball predictions. We’re going to wrap this week’s edition of FTP, the Futurum Tech Podcast. Thank you to all of our listeners. We love having you here. Let us know what you’re thinking. Whenever we hear back from you, it’s great, it’s learning, it’s fuel for us. If you haven’t yet, go ahead and hit the subscribe button, hit the like button, share us with your friends. Let us know what you’d like to hear us talk about for you on the next edition of FTP, the Futurum Tech Podcast. Dan, Olivier, guys, have a great weekend. I am out of here.

There will be plenty more tech topics and tech conversations right here on FTP, the Futurum Tech Podcast. Please be sure to subscribe to us on iTunes. Join us, become part of our community. We would love to hear from you. We’ll see you later.

Disclaimer: The Futurum Tech Podcast is for information and entertainment purposes only. Over the course of this podcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we do not ask that you treat us as such.

Author Information

Fred is an experienced analyst and advisor, bringing over 30 years of knowledge and expertise in the digital and technology markets.

Prior to joining The Futurum Group, Fred worked with Samadhi Partners, launching the Digital Trust practice at HfS Research, Current Analysis, Decisys, and the Aurelian Group. He has also worked at Gartner, E&Y, Newbridge Networks’ Advanced Technology Group (now Alcatel), and DTECH LABS (now part of Cubic Corporation).

Fred studied engineering and music at Syracuse University. A frequent author and speaker, Fred has served as a guest lecturer at the George Mason University School of Business (Porter: Information Systems and Operations Management), keynoted the Colombian Asociación Nacional de Empresarios Sourcing Summit, served as an executive committee member of the Intellifest International Conference on Reasoning (AI) Technologies, and has spoken at #SXSW on trust in the digital economy.

His analysis and commentary have appeared through venues such as Cheddar TV, Adotas, CNN, Social Media Today, Seeking Alpha, Talk Markets, and Network World (IDG).

