On this episode of FTP, Futurum Tech Podcast, we’re going to talk a little bit about Russia and memes and data privacy, always an interesting topic. We’ll talk about Huawei and their move to, or not to, the Android operating system. We’ll talk about deepfakes, we’ll talk about Twitter and what they’re doing in social media. We’ll also touch on Amazon and their interest in buying your data. And we’ll see where Intel and AMD stand today. And then we’ll talk a little bit more about data privacy and geolocation.
Our Main Dive
The recent “Russia is stealing your photos” meme highlights both the fear around data privacy (or perhaps just of Russians) and the truth about most app Terms of Service (which nobody ever really reads, do they?). The real issue here isn’t Russia (as memes and certain fear-driven news networks might suggest) but the granting of “perpetual, royalty-free” rights to data that we share with *many* popular apps, from “social quizzes” to the “here’s what I’ll look like in 10 years if I ever go missing and you need a picture for the milk carton” craze. And it’s not just FaceApp, it’s *most* apps. And that poses a challenge for both consumers and brands as they grapple with the future of data privacy (and the economy that a laissez-faire, caveat emptor posture has created).
Here’s a great (or not-so-great) example: “You grant FaceApp a perpetual, irrevocable, nonexclusive, royalty-free, worldwide, fully-paid, transferable sub-licensable license to use, reproduce, modify, adapt, publish, translate, create derivative works from, distribute, publicly perform and display your User Content and any name, username or likeness provided in connection with your User Content in all media formats and channels now known or later developed, without compensation to you.”
By using this app you are by default giving up all rights to your data (and in this case your image, likeness, face). And they can use that data (and your likeness) any way they wish. Including to modify how you look. And they don’t ever have to tell you what they’ve done. Oh my…
How can we fix this issue? To start, users need to stop using apps they just don’t need, and ease up on dubious quizzes and surveys that reveal almost nothing about ourselves except that we like to take meaningless surveys and quizzes. But we also need some responsible legislation, and the California Consumer Privacy Act just might be the ticket.
Our Fast Five
We dig into this week’s interesting and noteworthy news:
- Huawei phones are likely to still be Androids for a long time, at least according to Huawei execs. We’re not sure what that portends for the users of Huawei phones or if Google and the US government intend to continue to put Huawei in the cross-hairs of a tech embargo. But we do know the ongoing US-China trade war isn’t likely to stop any time soon and that the US government isn’t always firing at the right target.
- The very real dangers of deepfake videos go well beyond their initial use to create fake porn. These AI-infused videos can influence stock prices, elections, careers, and more. And now they can be made from a single image using a technique called one-shot learning.
- Twitter is testing new in-tweet labels for how replies, authors and commenters are identified. Yes, there is a problem that needs to be fixed. No, we’re not sure this is the right approach.
- Amazon says “For $10, let us track you all over the web” and consumers reply “Yay, let’s do it!” Futurum’s analysts just shake their heads at this one.
- Intel falls behind AMD in an interesting twist in the battle for nanometer bragging rights that highlights how Moore’s Law just might be circumvented as chip manufacturers finally figure out how to bake the silicon equivalent of a seven-layer cake.
Tech Bites
This week we give a much-needed smack to companies that geo-track to excess and those that abuse the technology. In this case that includes Steve Bannon and the GOP who thought the idea of using geolocation data from mobile phones to identify and track voters who visited houses of worship would be a great way to influence an election.
Crystal Ball: Future-um Predictions and Guesses
Will the California Consumer Privacy Act actually result in better privacy protections for the country? We’re hopeful, but far from certain on this one.
Transcript:
Fred McClimans: Welcome to this edition of FTP, the Futurum Tech Podcast. I’m your host, Fred McClimans, joined this week by my cohost Olivier Blanchard and a sit-in host, Shelly Kramer, also of Futurum Research. Shelly, Olivier, welcome to this week’s podcast.
Shelly Kramer: Thanks Fred.
Olivier Blanchard: It’s good to be here. Welcoming Shelly to the podcast, finally.
Fred McClimans: Yes.
Olivier Blanchard: Finally.
Fred McClimans: For our regular listeners, Dan Newman, our other cohost, is usually sitting in the virtual chair opposite from me and Olivier. Dan is on holiday in Europe, and this week we have brought Shelly in to lend some fresh insights into our conversation. So, we do have a busy show today. We’re going to talk a little bit about Russia and memes and data privacy, always an interesting topic. We’ll also talk a bit about Huawei and their move to, or not to, the Android operating system. We’ll talk about deepfakes, we’ll talk about Twitter and what they’re doing in social media. We’ll also touch on Amazon and their interest in buying your data. And we’ll see where Intel and AMD stand today. And then we’ll talk a little bit more about data privacy and geolocation.
But before we get to that, I do need to remind everybody that the Futurum Tech Podcast is for information and entertainment purposes only. Over the course of this podcast, we will talk about several companies that are publicly traded and we may even reference that fact and their equity share price. But please do not take anything that we say as any type of recommendation as to what you should do with your investment dollars. We are not investment advisors and we do not ask that you treat us as such.
So with that, Shelly, Olivier, this past week, we saw another wave of data protection, data privacy, everybody beware memes floating around the inter webs. This one relating to an app called FaceApp, which has the ability to, using some really actually interesting artificial intelligence, and there are many people who are saying, the purpose of this app and others is to actually gather information about people to train AI systems to age people. But in this case, FaceApp allows you to take a picture of yourself and literally age yourself forward in time, 5, 10, 15, 20 years. And then presents a very realistic looking photograph back. You’ve got the changes to the skin texture, the hair, the graying and so forth. Phenomenal app that people are bouncing all over the place.
However, it came out this week that the company behind this app is from Russia. That, along with some interesting terms of service language, prompted a meme floating around; even my son brought me a message he had received from a relative saying, dad, look at this, it says Russia is stealing your data, delete FaceApp now, along with a couple of links to some conspiracy sites.
It’s an interesting situation that we face here, and I think the issue from my perspective isn’t that this app comes from Russia; there are a lot of great software apps that come out of Russia. In fact, Telegram, one of the secure messaging apps that I use almost every day, is built by Russian developers. That’s fine. What I’d like to dive into a little bit here are the issues surrounding the terms of service, not just with FaceApp, but with other organizations out there, when we agree to use an app. And given the way things go viral, it’s so easy for people to just say, oh, here’s a great app, here, install it, load it, use it.
But let me just read this to you here. In the terms of service for FaceApp, it says, “You grant FaceApp a perpetual, irrevocable, nonexclusive, royalty-free, worldwide, fully-paid, transferable sub-licensable license to use, reproduce, modify, adapt, publish, translate, create derivative works from, distribute, publicly perform and display your User Content and any name, username or likeness provided in connection with your User Content in all media formats and channels, now known or later developed, without compensation to you.”
Shelly Kramer: Actually, and what they left off is without compensation and notification of any kind whatsoever, in any way, shape or form to you.
Fred McClimans: Yes. So, you know, this kind of thing, Shelly, naturally gravitates to the darker side of my brain that says, what could they possibly be doing here? And FaceApp is a bit unique because unlike some apps out there, it actually brings your data into the cloud, onto its own servers, where it performs its magic and manipulation, rather than doing everything on your phone. But Shelly, you wrote about this this week. How much of an issue is this? And from your perspective, what really is the real issue here in what’s taking place?
Shelly Kramer: I think the real issue is consumer privacy in general, and our collective behavior. We’re just so dumb sometimes and so easily manipulated. I’ve seen people using this app, and I think some of it is attention. You know, I saw people using it and sharing it on Facebook yesterday, in fact. And so, you’re a 30 year old woman, you do this aging and then 50 of your friends are like, “oh my God, you’re so beautiful, you’re going to be so beautiful when you’re,” I mean, so some of it is a cry for attention or the latest new thing or whatever. But it’s just like the This Is Your Digital Life app that created so many problems for Facebook and potentially led to the Cambridge Analytica situation that may or may not have affected US elections.
Whenever you give somebody access to your data in any way, it’s dangerous to you. And when you give somebody your face, that likeness is 100% being used to train facial recognition databases. It just doesn’t make any sense. I realize that the ordinary average person doesn’t sit around thinking about these things or talking about these things like we bunch of tech nerds do. But I think we need to be more aware of this, and we need to back up a little bit in terms of what we do and what we share and what we put out there.
Fred McClimans: There are a couple of phrases that, for whatever reason, we just kind of naturally use or overuse here on the podcast. But one of them is: if you’re not paying for the product, you are the product. And FaceApp, like many others, if it’s free, you are giving something in return for that. Most people don’t think about what that is; it’s data 99.999% of the time.
What’s interesting here too, Shelly, is the point you brought up about the very nature of this app and how it might be different from some others, in that it involves a photograph. In that photograph, there are different types of data, timestamps, geolocation tagging and so forth, that can be embedded: information such as where the picture was taken, what type of device it was taken on, etc.
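A quick aside for the technically curious: most of that embedded photo data is EXIF metadata, and it takes only a few lines of Python to see what a single image gives away. A minimal sketch, assuming the Pillow imaging library is installed and a local file named photo.jpg (both illustrative assumptions):

```python
# Dump the EXIF metadata embedded in a photo using Pillow
# (pip install Pillow). "photo.jpg" is a placeholder filename.
from PIL import Image, ExifTags

img = Image.open("photo.jpg")
exif = img.getexif()  # empty if the image carries no EXIF data

for tag_id, value in exif.items():
    # Translate numeric tag IDs (e.g., 306) into readable names (e.g., DateTime)
    name = ExifTags.TAGS.get(tag_id, tag_id)
    print(f"{name}: {value}")

# Typical output includes DateTime, Make, Model, and Software, plus a
# GPSInfo entry that leads to the latitude/longitude where it was shot.
```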
But you’re also giving them your face, and you’re giving them a way to track your physical representation across other sites. So, when we talk about the use of images to train AI systems and data, we very often separate the two. But bringing those two together in this app, you have a tool that could very literally, at some point in the future, allow this company and its partners to look at photographs and track you throughout your life across the web, from location to location. And at some point, if you go missing, potentially say, hey guys, we can tell you what that person’s going to look like in five or 10 years. Which gets back to the larger issue of the whole surveillance society that we exist in.
Olivier, I’d love to get your thoughts on this, but also that surveillance aspect. We talked about the pervasiveness of cameras and images and apps like this in the past. Is this something that most brands should be thinking of, most enterprises should be thinking of in terms of how far they’re willing to go with some of these technologies as well?
Olivier Blanchard: You know, I don’t think it’s really a brand thing. I don’t think that anybody is going to self-regulate all that well when it comes to this. So, let me backtrack a little bit. I think that whenever we run into apps like this, whether it’s an app or some other kind of software, they’re basically mousetraps. And there are two levels to the mousetrap. The first level is that it’s going to capture some bit of data on the surface that you’re aware you’re submitting and that it can use for whatever reasons, nefarious or not.
In this particular case with FaceApp, it’s pictures of your face, and quite possibly many pictures of your face, so that they can start building a library of your facial features that at some point goes from 2D to 3D, so they can improve identification and build on databases of people’s faces around the world.
Fred McClimans: Now, Olivier, I also want to mention it’s not just your face, it’s also the background behind you.
Olivier Blanchard: Right. Okay. So that’s what I was getting to. So you have the surface one, and if we go back to 2016 during the election, since that was brought up, the apps then weren’t so much about faces, they were about taking surveys. It was kind of like, which Democrats do you associate the most with, which Republicans do you associate the most with. And it was kind of like, who is closest to Bernie Sanders as opposed to Hillary Clinton or whatever.
And so, with the data that you were submitting, whether it’s a survey or your face, there’s implicit knowledge that you’re submitting this information to someone, even though you may not realize where it’s going. And I think that we’ve grown in our understanding that when we submit this kind of data, it’s possibly being used by someone in ways that we hadn’t anticipated, or it could be at some point.
But then there’s the second layer of the mousetrap, which is what you just talked about: the other layer of underground data that it’s able to collect because it’s an app, or because you’ve signed in through Facebook or some other login scheme, where now it can access all of the pictures on your phone. It can access your microphone, your friends list, your email, your texts. And that is the part that’s far less obvious and that most people aren’t necessarily aware of.
So, now that we’ve kind of framed this, and you asked me about brands and companies and maybe even municipalities and government agencies making use of this data, I think that we’re starting to see kind of a segmentation of certain cities, certain police departments, certain agencies and certain tech companies saying we will not participate in this. We’re not going to allow facial recognition tech in these particular areas or to be used in this particular way.
And then you have other areas, geographic, corporate, governmental, that are saying, yeah, no problem. We’re going to use it because it’s national security, or it helps our customers, or it improves in-store experiences, whatever. And you’re not going to have a cohesive answer to this problem unless you have an overarching legal framework that guides what you can and cannot do and explains why.
And I think, again, we keep circling back to this every few podcasts, every few episodes, but this highlights the need for some kind of government regulation. I’m sorry for offending libertarian ears if you’re listening to us.
Fred McClimans: I’ll forgive you for them.
Olivier Blanchard: That’s fine. But there has to be, and I’m not talking about banning all this stuff, I’m talking about creating a logical framework that balances civil rights and the needs of voters and of the people with the value that this can bring. And we have to have an honest discussion and an honest framework that essentially pits the risks against the opportunities and the value.
And we’re not having this discussion right now; there’s no one leading this conversation right now. And so what we have is this kind of wild west of data collection and privacy intrusion and your data being sold left and right to whomever. And more importantly, consumers, users of technologies like us, don’t necessarily understand, and don’t have full disclosure available to us as to, how our data is going to be used, why, by whom, when and where. I think that’s the first thing that needs to change. We need to have clear visibility and more control over that data.
Fred McClimans: Yeah. So let me bring this back to brands for a second. And Shelly, I can hear you, and you’ve got something to say, and we definitely want to hear it. But from a brand perspective, we have a whole new digital economy that is built around the idea that data is free. It’s fueling business model after business model after business model. FaceApp is not the only company out there that has this type of aging app. As we mentioned earlier in the podcast, Facebook has a similar app, and there are others; I would expect that pretty soon just about any camera app will have an aging filter built into it.
But think about the way that brands today ask for, and in many cases get, pictures of their fans, their followers, their advocates, their ambassadors out there. Even something as simple as on Facebook. And hey, I’ll be the first to raise my hand here: back a few years ago, when I was doing some work with one of the entertainment sports networks out there, we set up programs to expressly ask people, hey, send us a picture of this on social media, and they did.
So, there are a lot of touch points here that get into a very murky area with regard to what you can do. Certainly, I know there are rules and regulations around sweepstakes and giveaways and special programs like that. But the fact that brands do ask for, and expect, people to share pictures of themselves and their product on Snapchat, on Instagram, on Facebook, there’s an issue there, and I think it relates directly to this. Shelly?
Shelly Kramer: No, I actually agree, and as you were talking I was thinking about the tech conferences I’ve been to in the last few years. It used to be, five years ago, you’d go to a conference, you’d be listening to a keynote, and you’d be shaking your head going, okay, so the president of this company isn’t even using Twitter, they’re not using social in any way as part of this conference. And now, at every tech conference, part of every message from the stage is tweet it up, share a picture. So I think that’s a really interesting point that you raised, in terms of what brands’ obligations are with regard to user created content that is around their brand, their products, their events, that kind of thing. And that’s probably a legal issue as well as anything.
But I do think that one thing that bears mentioning is the California Consumer Privacy Act. This act is going to go into effect in January of 2020, and it involves privacy rights and consumer protections for residents of the state of California. So how does that affect us in the US as a whole? California is just one state. Well, first of all, California is a really big state. And I think this is going to be the US version of GDPR. We have brands that are already wrestling with getting their heads around GDPR, and we’re seeing some very, very large fines imposed in the UK on global companies.
And this California Privacy Act is something that nobody’s really talking about. I’m not sure many people are paying attention to it, but it’s going to affect many of us in very big ways. And I think it’s also the beginning of other states enacting legislation like this, taking steps to protect consumer privacy.
Fred McClimans: Yeah, that’s a great point about the Consumer Privacy Act. And I think you’re right. I hadn’t really thought about this, but it could have the same impact that GDPR had globally. I mean, GDPR was and is an EU regulation, but it bled over into every business that does business there and became sort of a de facto model for what we need to do in other regions. And I could very easily see that happening with the Consumer Privacy Act in California. I’d have to look into it a bit deeper to see if it actually covers companies outside of California that deal with California residents.
Shelly Kramer: I believe it does.
Fred McClimans: That would be an interesting twist. So, by that nature, if you have a business in New York, a business in Nevada, a business in Asia, and you’re doing online business with somebody in California, they could try to hold you to that, much like the US tries to enforce international sanctions and so forth against everybody.
Shelly Kramer: Well, I think it’s going to be very similar to GDPR. Think about it this way. The vast majority of our clients are global companies. So, they are very much impacted by GDPR regulations. So I think the same is going to be true of California. I mean, I think it’s going to have sweeping effects across every sector. And I think that it’s going to be, again, I think it’s going to be the beginning of other states realizing that they need to take more steps to protect privacy, even if consumers don’t even really realize how important it is to protect their own privacy because they’re doing stupid FaceApp photo swaps.
Fred McClimans: Exactly, exactly. I’ll make just a quick prediction here before we move on to our Fast Five. I’m going to say that if that does in fact happen with the California Consumer Privacy Act, it will probably be embraced more readily in blue states than red states.
Shelly Kramer: Oh, absolutely.
Fred McClimans: Just a guess, just a guess.
Shelly Kramer: I think that’s a safe guess. Safe guess.
Fred McClimans: So with that, let’s move on from FaceApp, a really interesting discussion for our listeners as well. In our show notes, we will include links to some of the topics we’ve talked about in our main dive. But now, it’s on to our Fast Five, five things that caught our eye this past week that we think you ought to know about. And Olivier, I’m going to kick it off with you here. Huawei phones, they’re Android now, but I thought they were shifting away.
What’s going on there?
Olivier Blanchard: Yeah, everybody thought they were shifting away. So just to frame this, the trade dispute between China and the United States resulted in a sort of weird shadow ban of Huawei products. And for a few days or a couple of weeks, a lot of US companies severed ties with Huawei, at least temporarily, trying to avoid running afoul of this US ban on selling technology to Huawei. One of those companies was Google, which, as you all know, is the company that built and operates Android.
So, Huawei found itself in a position where it might not be able to obtain any new support from Google for Android on Huawei phones. And there was a story that claimed Huawei had been working on a separate, unique operating system for its phones for quite some time, and that it was kind of their plan B. And if a resolution of this trade dispute didn’t arrive at some point, Huawei might shift away from Android and switch to this proprietary OS.
Well, as it turns out, we found out this week that that is actually not the case; it was misreported or mischaracterized in some way. Huawei now says that its operating system, whose anglicized name is Hongmeng, is actually not a mobile OS that could rival Android. It’s more of a commercial OS for completely different applications, much more basic than what they would need to compete against Android or iOS on their phones. And so, reports of a proprietary Huawei phone OS have been overblown. This came in the last 24 hours from Brussels, directly from Huawei Senior VP Catherine Chen, who made the clarification this week. So there you go. I guess Huawei will continue to be Android.
Fred McClimans: Which brings up the follow-up question. If Android is going to be the OS of choice for Huawei, is there still the risk that the US puts Huawei onto the deep-six list for technology transfers, which would include the Android operating system?
Olivier Blanchard: The quick answer to that is, I don’t think so. The US has kind of relaxed its overarching ban temporarily, so it could come back. But I think how this ultimately goes is, there may be a split between how the US government treats Huawei equipment and Huawei devices. There’s probably a good reason for the US government to continue banning certain types of Huawei equipment and technology transfers that deal specifically with that equipment. And then the more benign, less risky devices that Huawei provides, like laptops and phones and tablets, don’t really constitute a threat; there’s not really a national security issue there. So I think we may see that split.
Fred McClimans: Got it. Got it. So, Shelly, as we step into our second Fast Five, I’m going to lead in with a question for you that relates back to your topic and our earlier conversation. If I were trying to create a deepfake video involving a particular individual, wouldn’t having more photos or face images of that person, especially as they age or progress, be of use in that kind of environment?
Shelly Kramer: Well, you bet they would. You bet they would. You know, here’s the thing about deepfakes. They use machine learning to manipulate source material, and they create content where a person appears to be doing something that they didn’t do. Okay? So we’ve already seen some deepfake videos, and in many cases, deepfake videos are used to spread misinformation. I think a good example is the Nancy Pelosi video that was doctored. I can’t even remember what she was purportedly talking about, but that happened in the last several months.
And so, removing politics from it, which is kind of difficult in this political climate, the thing about deepfakes that is particularly frightening is the fact that, while it’s better to have multiple images, you can actually use one single still image. That’s done with a technique called one-shot learning. Samsung AI researchers based in the company’s Moscow AI lab have been working on this, and I think it’s really interesting. So there’s one-shot learning and then there’s also multiple-shot learning, and, of course, the more shots you have, the better the results.
But to take this beyond politics, and this is the part of the conversation that ties back to the FaceApp situation: we in general have no idea what’s possible with the information we put out there, whether it’s a photo, whether it’s a video, whatever. What can happen is that I can share images on Facebook, or I can share images on Instagram, and I might be dating someone who’s crazy and I don’t know it. And that someone might be really, really smart and might want to mess with me. So, he could take one image, or several images, and he could put together a video, maybe let’s say a porn video starring me. Maybe someone involved in corporate espionage wants to put together a video, grabbing images that they find available, and manipulate a stock price. Maybe somebody just wants to wreck the hell out of somebody else’s career.
So when you look at all of the things that are possible, yeah, politics is just one part of it, but the technology exists to use the information all of us put out there on the web on a regular basis to create content that is completely fake and that could have very, very big ramifications.
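For a feel for the mechanics behind the one-shot technique Shelly describes, here is a toy structural sketch in Python with PyTorch: an embedder network distills a single source photo into an identity vector, which then conditions a generator that renders a target pose. The layer sizes are arbitrary assumptions, and this untrained skeleton outputs noise rather than a believable face; it only illustrates the shape of the approach, not the Samsung team’s actual model.

```python
# Toy structural sketch of one-shot "talking head" generation (not a
# working deepfake system): one photo -> identity embedding -> generator.
import torch
import torch.nn as nn

class Embedder(nn.Module):
    """Maps a single source image to an identity embedding."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, dim),
        )
    def forward(self, img):
        return self.net(img)

class Generator(nn.Module):
    """Renders a frame from a pose sketch, conditioned on the embedding."""
    def __init__(self, dim=128):
        super().__init__()
        self.fuse = nn.Conv2d(3 + dim, 32, 3, padding=1)
        self.out = nn.Conv2d(32, 3, 3, padding=1)
    def forward(self, pose, emb):
        b, _, h, w = pose.shape
        # Broadcast the identity vector across every pixel of the pose map
        style = emb.view(b, -1, 1, 1).expand(b, emb.shape[1], h, w)
        x = torch.relu(self.fuse(torch.cat([pose, style], dim=1)))
        return torch.tanh(self.out(x))

source = torch.randn(1, 3, 64, 64)  # the single photo of the subject
pose = torch.randn(1, 3, 64, 64)    # facial-landmark sketch of target motion
emb = Embedder()(source)
fake_frame = Generator()(pose, emb)  # untrained, so the output is noise
print(fake_frame.shape)              # torch.Size([1, 3, 64, 64])
```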
Fred McClimans: Interesting, very interesting, and scary. The technology that we have at our disposal is so powerful and can be so much of a creative tool for people, but there’s also that flip side, the dark side. So the one-shot learning, I’m going to have to take a look at that; it sounds like it’s worth following up on.
So as we move into our next Fast Five, I’m going to talk real quick about Twitter, because we all spend quite a bit of time on Twitter. There’s a lot of information exchange, there are a lot of conversations. And one of the things Twitter’s been working on for a long time now is improving the way conversations happen on Twitter. One of the things they started doing earlier this year was making it clear, when you were looking at a string of tweets, who the actual author of the tweet stream was and what role others may have played in replying to or forwarding along that content. Simply to make it easier to track.
So they tried a few things, including putting little text banners alongside people who had replied to messages. They tried using literally the word “author” next to the person if they commented on their own tweet down the stream.
They’ve been progressing with that, they’ve been improving it. Then this past week, they actually started to change their approach a little bit. They’re now testing little icons that would indicate whether somebody was the author of something or whether they were simply replying or forwarding it on.
It’s an interesting thing. It’s worth noting because, if nothing else, it points to the fact that Twitter just doesn’t yet know what it’s going to be when it grows up, or what it really takes to drive growth in conversation and transparency and so many other things on its platform. In the same vein, in the past week they said they were now going to start providing additional information about tweets that are hidden from you.
Every so often, you’ll scroll through your Twitter feed and you’ll see a message that says this content is unavailable. That’s something a lot of people have jumped on, saying, oh, censorship, you’re censoring this from me. But in most cases, that’s usually just your own user setting saying, please don’t show me content that might be considered profane or something of that nature. Twitter is going to start to provide more information about that. Hopefully, they get this right, because the platform has become such an important part of our social, political and business fabric that it’s worth taking the time to make sure they get it right.
So with that, Olivier-
Shelly Kramer: Fred, if I can interrupt you real quickly: you know, when I was reading about this, I was looking at the images in an article, and I saw the first image where it says author. That’s actually what I’m used to, because that’s what you see on LinkedIn, and it was just kind of the norm. And then when I was looking at an image where they made the change, it took me forever, I suppose I’m an idiot, but it took me forever to figure out that, oh, it’s the microphone icon. It actually made me work harder to tie the images back to who was saying what than just saying this person was mentioned, and this person is the author.
In the article I read about this, it says it’s also unclear why Twitter thinks users are clamoring to see this information. So it’ll be interesting to see how this is embraced and adopted and everything else.
Fred McClimans: Well, I can tell you, if they go to smaller icons, I’m going to have to get a larger phone screen. It’s already at the limit of that. So Olivier, Intel, AMD. These guys go at it every day, every week. What’s going on?
Olivier Blanchard: Yes, they do. You know, there used to be a time when Intel and AMD were kind of in different lanes. They’re chip makers, by the way, in case anybody listening doesn’t know who AMD is or what Intel does. Traditionally, Intel focused on higher clock speeds and, to some degree, efficiency, while AMD was known more for high core counts and multi-threaded performance.
And so there wasn’t that much overlap, really, but now there is, and Intel and AMD are basically duking it out in the computer chip manufacturing industry. Intel is probably a name that everybody knows super well; you see the Intel sticker on most of your laptops. I actually have one laptop in front of me that has an Intel sticker on it and another that has AMD on it. And that perfectly illustrates the state we find ourselves in.
Well, anyway, even though Intel has much more name recognition than AMD with the general public, it’s actually fallen behind AMD in some pretty key areas. I love Intel. What they do well, they do super well. But there are things that they still struggle with a little bit, as we discovered earlier this year when Intel decided to get out of the 5G modem business after Apple decided to go back to Qualcomm for its 5G modems, and iPhone modems in general, period.
But now we discovered this week that Intel is also having trouble bringing seven nanometer chips to market. While chip makers Qualcomm and AMD have already announced seven nanometer chips, and some of them are actually already out, Intel just announced this week that it will not be releasing a seven nanometer chip until 2021. That’s not super embarrassing, but it definitely shows the extent to which Intel, like a lot of other companies, sometimes doesn’t focus on the right things, or does but struggles to deliver in areas that are a little bit on the edge of its own envelope, as opposed to other companies that might not do as well in the general categories but are really good at innovating and pushing the envelope.
So Intel is releasing a 10 nanometer chip this year, but you’ll have to wait until 2021 for it to catch up to AMD and Qualcomm. That’s that.
Fred McClimans: For a company that lives or dies by Moore’s Law and the ability to double the number of transistors in a given space every 18 to 24 months, this is a big thing. But at the same time, I just want to point out that seven nanometers has its own inherent issues, and going smaller or narrower than that raises some very significant issues given the hardware technology we have today. There are other things coming along, optical switching as well as some really cool things in 3D chip design, that I think may obviate the need to get to seven nanometers as quickly as some people would like for every application.
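To put rough numbers on that Moore’s Law cadence, here is an idealized back-of-envelope calculation. Real node names no longer map cleanly onto physical feature sizes, so treat this purely as illustrative arithmetic:

```python
# Idealized scaling: transistor density goes as 1 / (feature size)^2,
# so shrinking from 10 nm to 7 nm is roughly one Moore's Law doubling.
density_gain = (10 / 7) ** 2
print(f"10 nm -> 7 nm: ~{density_gain:.2f}x the transistors per area")  # ~2.04x

# The classic cadence (one doubling every 18-24 months) compounds fast:
for years in (2, 4, 6):
    print(f"{years} years at one doubling per 2 years: {2 ** (years / 2):.0f}x")
```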
So with that, we have one more Fast Five to go. Shelly, take us home here with Amazon and what $10 can buy you or buy Amazon perhaps.
Shelly Kramer: Well, I’m fascinated by consumer behavior, and Amazon rolled out a promotion in conjunction with its annual Prime Day. It was really just like a 48 hour marketing blitz. The big hook was that you could get $10 of credit on a $50 purchase if you installed Amazon Assistant on your browser, and you have to be a Chrome or Firefox user for it to work. Basically, installing that assistant on your browser pretty much tells Amazon every place you go on the web at all times. And by the way, it’s not like Google doesn’t already have that information, certainly if you’re using Chrome and all the handy dandy plugins that Chrome provides. But I think it’s a brilliant move on Amazon’s part, and I think it’s indicative of how easily people are willing to give their data away for 10 bucks.
It really goes back to the conversation we had earlier: when a big tech company says something’s free, it’s you that is the product, and your data is what Amazon is interested in.
Fred McClimans: It always comes back to the data. In the digital age, it’s all about the data.
Shelly Kramer: It is all about the data.
Fred McClimans: So thanks, Shelly. Moving on, we are now at that auspicious moment in our podcast where we entertain Tech Bites: something that’s happened or come to light in the industry, involving technology, that just makes us shake our heads and say, you know, please, something has to be fixed here. In this week’s edition, we’re going to dive a little bit into the world of politics for a very brief Tech Bites.
Steve Bannon, you guys remember Steve Bannon of Trump election campaign fame from back in 2015 and 2016, the Steve Bannon who’s now in the EU rather than the US. There is a documentary about him called The Brink that has been put together. The Brink included a number of interesting things. One thing it did not include, that was left on the cutting room floor, was a section of an interview with Steve where he talked about their get-out-the-Catholic-vote strategy in certain areas.
In particular, according to Steve, they were able to go to the phone providers and the data mart providers, where data is bought and sold, and quite literally ask for geo-fencing data, or data based on application usage or cell phone usage, for everybody in Dubuque, Iowa who had visited a Roman Catholic church. Not only were they able to ask for the data, they were apparently able to buy the data and then use it as part of their get-out-the-vote campaign, targeting those individuals with specific advertisements that they thought would be appealing to them. I don’t know what those advertisements were; I can only imagine.
But geo-fencing location data is a huge thing in our economy today. And there are companies like Foursquare, one of the pioneers of the check-in that nobody really hears about today, that are still incredibly active out there in gathering this data. And I’m not linking Foursquare to this particular instance here. Most of the phone companies, most of the data providers, say, yes, well, the data is anonymized. But quite frankly, there’s a certain point at which you get so much aggregated data that even if it’s anonymized, if I know that a person has been at the following five locations, that they have visited this church and that store and this school over there, I can pretty much guess where they are, maybe even who they are. And I can certainly target them with advertisements.
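Fred’s re-identification point is easy to demonstrate. Below is a toy Python sketch using entirely made-up traces: matching just three known place-and-time observations against an “anonymized” dataset narrows the field to a single user.

```python
# Toy illustration (all data fabricated) of why "anonymized" location
# traces re-identify easily: a handful of known place/time points is
# often enough to single out one trace in an entire dataset.
anonymized_traces = {
    "user_001": {("church_main_st", "sun_10am"), ("grocery_5th", "sun_1pm"),
                 ("school_oak_ave", "mon_8am")},
    "user_002": {("church_main_st", "sun_10am"), ("gym_2nd_st", "sun_1pm")},
    "user_003": {("mall_west", "sat_3pm"), ("school_oak_ave", "mon_8am")},
}

# Three observations about the target, gleaned from public life:
known_points = {("church_main_st", "sun_10am"),
                ("grocery_5th", "sun_1pm"),
                ("school_oak_ave", "mon_8am")}

matches = [uid for uid, trace in anonymized_traces.items()
           if known_points <= trace]  # subset test: all known points present?
print(matches)  # ['user_001'] -- one candidate left; "anonymous" in name only
```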
We’ve talked about consumer privacy rights and data privacy rights in the past. One thing I would just put out there, and ask each of you for 30 seconds of quick feedback on: does it make sense for us to actually have certain carve-outs? Just like you can’t carry a concealed weapon into a school or a church or a public library, should we not have the same thing that says, quite literally, look, you can’t track people going in and out of religious houses of worship? Shelly?
Shelly Kramer: Well, first of all, geo-fencing is out there all day every day. I mean, you walk down a street anywhere and you might be getting served up ads, hey, are you hungry, well, come into this restaurant or whatever.
Fred McClimans: Minority Report syndrome, yeah.
Shelly Kramer: It’s very common. And I think the differentiator here is that, oh my god, it’s a Catholic church, or any house of worship, and isn’t that terrible. I think it is terrible. Do I think, in all candor, that this is something anybody’s going to care about enough to do anything about? I doubt it. Again, I don’t think the ordinary average human being has any idea what is happening with regard to their data, with regard to geo-tracking, with regard to what is available based on the devices that each one of us has in our hands, in our handbags, in our pockets all day, every day. People just have no idea.
And so, I’m not really sure this is the kind of thing you’re going to see people up in arms about, because they’re just not paying attention, and I’m not sure they really care. And the reality of it is, I have so many friends in their 30s, 40s, 50s, 60s who have no idea even how to use their phones. Are you that person who, when you go out with people, says, hey, if you just did this, it would change everything about your experience with your phone, and they have no idea what you’re talking about, and you’re kind of their tech support? Well, I think most of humanity is like that. They just don’t know. So, I do think it’s terrible.
They also, you know, in this instance, partnered with CatholicVote, which is a conservative group that has a very real investment in how the Catholic population votes. So yeah, I think it’s a bad thing. Are we going to see anything happen about it? I’m not so sure. Olivier, what do you think?
Olivier Blanchard: I think that if we see something happening, it will be embedded in the broader scope of digital rights or data rights, like a GDPR for the US. I think this should absolutely be included in it. And to me, it all boils down to opt in. As consumers, we should have the right to opt in, or by default be opted out, so that you have to explicitly opt in to allow your data to be collected.
What bothers me is there’s kind of a double layer here. There’s the basic layer: when you download an app and create an account and set up your preferences, there’s this tug of war of, is my data worth the experience or the value that I get from the app? Ideally, you would be able to select which data collection schemes, or which types of data collection, you allow versus the ones you don’t. But at least there’s a transaction there that you can point to.
When it comes to phone companies selling your data, and especially your location data, to third parties, it’s not even a question of consent; it’s without your knowledge. And if there’s no knowledge, there can’t be any consent to begin with. I have an issue with that. I think we need laws that allow consumers to be notified of this, to opt in or out of that kind of data collection, and to decide how the data is going to be used.
And then the third point is actually about user experience. I think every device manufacturer should be required, in every OS, to offer a slider option for opting in and out of the frequency or volume of ads, at different times or in different places. This is something I’ve been championing for nearly a decade now: if you can have a slider on your screen to turn on your phone or to access a different screen, it’s not that difficult to be able to turn off notifications or ads, or turn them back on, depending on whether you’re busy, or you’re in a place of worship and don’t want to have your data collected in that moment. And you can turn it back on when you’re okay with your data being collected or with being hit with random ads from third parties.
So, it’s kind of like this three-tiered model of legislation that addresses the UX, the data collection and the opt in, opt out.
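As a thought experiment, here is a minimal Python sketch of the per-category, opt-in-by-default consent model Olivier describes. The category names and the “quiet places” idea are illustrative assumptions, not any real platform’s API:

```python
# Minimal sketch of opt-in-by-default consent with per-category
# controls and location-based "quiet zones" (all names hypothetical).
from dataclasses import dataclass, field
from typing import Optional, Set

@dataclass
class ConsentSettings:
    # Every category defaults to False: the user must explicitly opt in.
    location_tracking: bool = False
    ad_targeting: bool = False
    third_party_resale: bool = False
    quiet_places: Set[str] = field(default_factory=set)

    def may_collect(self, category: str, place: Optional[str] = None) -> bool:
        if place is not None and place in self.quiet_places:
            return False  # hard off in designated locations
        return getattr(self, category, False)

prefs = ConsentSettings(ad_targeting=True, quiet_places={"place_of_worship"})
print(prefs.may_collect("ad_targeting"))                      # True: opted in
print(prefs.may_collect("ad_targeting", "place_of_worship"))  # False: quiet zone
print(prefs.may_collect("location_tracking"))                 # False: never opted in
```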
Shelly Kramer: Well, I do think the California Consumer Privacy Act is going to play a big role, and I think it’s going to get the ball rolling. I also think consumer awareness is really at the nascent stages. I think if you ask people, they’ll say they’re concerned about their privacy and their data, but I’m not sure they realize that what they do with their devices impacts their data security.
And I think there’s also a certain segment of the population that just doesn’t see the problem. I was getting on a flight a couple of months ago at the DFW Airport, and they were using facial recognition technology as part of the check-in process. They didn’t ask if you wanted to do this, and they didn’t ask if you wanted to opt out. They just told you that when you were boarding the flight, they were going to be using facial recognition technology. And so, I was talking with some of the people standing around me about how that really is kind of an infringement on rights. And it was funny, because a woman said, “You know what, here’s the thing, I don’t think it’s a big deal. I haven’t done anything wrong. If you haven’t done anything wrong, why should it matter? Let them take a scan of my face.”
And so you laugh, but I think in many people’s minds, it’s as simple as that. I haven’t done anything wrong, I don’t do anything wrong, so what does it matter if my face is in a database? What does it matter if I’m using the FaceApp app? What does it matter? They have no real grasp of the broader-reaching implications and how this could affect them negatively in the future.
Fred McClimans: That is a huge challenge. And Olivier, I’ll do two things here as we close out this Tech Bites segment. The first is a little tease: we do have some research coming out in a couple of months that will shed some really interesting light on the opt-in versus opt-out issue. And the second thing is, I think the day we go to a mandatory opt-out model, I’m sorry, a mandatory opt-in model, is the day we take 50% off the value of the NASDAQ and the New York Stock Exchange, because that would collapse every business that we have out there today.
Olivier Blanchard: No it wouldn’t.
Fred McClimans: Yes it would. We’ll talk about this offline.
Shelly Kramer: We’ll debate that.
Fred McClimans: We will. Shelly, Olivier, thank you very much for being part of this podcast this week. I do want to close out with our crystal ball prediction. I know we talked about a few things before; I’m going to change it up a little bit here and lighten it up a bit. Put on your prognosticator’s hat: this month, the hot social meme is FaceApp, and it records your face. Twelve months out, give me the name of the popular social app and tell me what it does. Shelly?
Shelly Kramer: I really, I don’t work on the fly like that quite so much. You know what, Olivier, you go first darn it.
Olivier Blanchard: I’ll go first. I think it’ll be called FAKR, spelled F-A-K-R. And it’ll be a deepfake GIF maker. So basically, you’ll be able to CGI yourself into pretty much anything you want and create GIFs on the fly.
Fred McClimans: I love that. I agree with you, except it’s JIF maker, not GIF maker.
Shelly Kramer: Oh, I love that conversation.
Olivier Blanchard: Graphics.
Fred McClimans: Yeah, yeah, yeah. Okay. So Shelly, back to you. You’ve got 20 seconds now on the clock. What do you think that app is going to be called and what might it do?
Shelly Kramer: I don’t have anything better than what Olivier has said. I’m sorry. I don’t. I don’t have anything better.
Olivier Blanchard: Honestly, I should not have let the cat out of the bag. This is actually a side project, I’m going to make billions.
Shelly Kramer: I think it’s going to be an app that has something to do with replacing all human contact whatsoever, so that we never have to leave our house or have a physical interaction with another person. Anything you want, you can just get. Sex? Here, virtual reality sex doll. Food? Here, have that delivered. I just think we’re going to be living in our rat holes, enabled by some kind of app somewhere that eliminates all human contact from our lives.
Fred McClimans: Perfect. I love it. By the way, Olivier, just to let you know, there is a website out there already called fakr.com, F-A-K-R.com. Interestingly enough, it has nothing to do with deepfakes. It’s a site owned by DerbyFever.com, a division of SmartAcre LLC, where you can play the ponies.
And with that, I’m going to call this episode of the Futurum Tech Podcast, FTP as we like to call it, a wrap. Shelly, Olivier, thank you very much. On behalf of Shelly, Olivier and Dan, who is off in Europe having a great time this week, I’d like to thank everybody listening to today’s podcast. Please, if you have comments or feedback, let us know. Hit the subscribe button, share with your friends, let us know what you’d like to hear about, and we will be back next week with a new edition of FTP, The Futurum Tech Podcast.
Photo Credit: Geek.com
Disclaimer: The Futurum Tech Podcast is for information and entertainment purposes only. Over the course of this podcast, we may talk about companies that are publicly traded and we may even reference that fact and their equity share price, but please do not take anything that we say as a recommendation about what you should do with your investment dollars. We are not investment advisors and we do not ask that you treat us as such.
Author Information
Fred is an experienced analyst and advisor, bringing over 30 years of knowledge and expertise in the digital and technology markets.
Prior to joining The Futurum Group, Fred worked with Samadhi Partners, launching the Digital Trust practice at HfS Research, Current Analysis, Decisys, and the Aurelian Group. He has also worked at Gartner, E&Y, Newbridge Networks’ Advanced Technology Group (now Alcatel), and DTECH LABS (now part of Cubic Corporation).
Fred studied engineering and music at Syracuse University. A frequent author and speaker, Fred has served as a guest lecturer at the George Mason University School of Business (Porter: Information Systems and Operations Management), keynoted the Colombian Asociación Nacional de Empresarios Sourcing Summit, served as an executive committee member of the Intellifest International Conference on Reasoning (AI) Technologies, and has spoken at SXSW on trust in the digital economy.
His analysis and commentary have appeared through venues such as Cheddar TV, Adotas, CNN, Social Media Today, Seeking Alpha, Talk Markets, and Network World (IDG).