Episode Seven – Other Development Team Roles and Apple's Child Safety Measures

Summary

Ash and Ian give some love to those other development team roles - the ones that save you from wasting loads of time on HiPPOs (Highest Paid Person's Opinions) - and then they try to decide how much outrage is appropriate for Apple's proposed Child Safety measures which have, since the recording was made, been postponed.

# What A Lot Of Things Episode Seven – Other Development Team Roles and Apple's Child Safety Measures
Ash: You start

CHAPTER 1: TECHNOLOGY EEYORES @ 00:00

Ian: Hello Ash!

Ash: Hello Ian, how are you?

Ian: Oh, I'm top spiffing.

Ash: Top spiffing?

Ian: Is that a thing you can be? I don't know where that came from really?

Ash: It's a good start. It's not a dour "Not so bad".

Ian: That's true. That's a very British response, isn't it?

Ash: Absolutely

Ian: Bearable.

I've just won an Olympic gold medal. It's all right.

(That's never going to happen.)

No, I'm just very excited because of our five-star review.

Ash: I was incredibly excited when I found it, I messaged you straight away. I think it was the first thing I did.

Ian: Yes! We were swept away on a giddy tide of delight.

Ash: Well, it's not every day you get a five star review for something you've created

Ian: no, that's true. That is true.

Ash: And the internet can be a harsh place, Ian

Ian: That is also true.

Ash: there's a lot more people who would give you not a five-star review than would give you a five-star review.

Ian: Yes. And that's quite likely to happen if you make a terrible, terrible, terrible, mortifying error when you're recording your podcast.

Ash: And who would do that?

Ian: Well, as it turns out, me.

Ash: Not again, Ian.

Ian: What do you mean not again? You can't say that! This is my first error ever. I've been perfect up to now.

Do you remember last week when we were talking about the state of DevOps report?

Ash: Platform teams!

Ian: It wasn't last week, was it?

Ash: No it wasn't, you've made a, you've made like a reverse estimate there.

Ian: Do you remember last time

Ash: that's better

Ian: when we were talking about the state of DevOps report?

Ash: Platform teams. I think it was, wasn't it?

Ian: Yes, it was. And I was very much taken with the fact that a post on Martin Fowler's blog was worded extremely similarly to the words that you'd found for a description of a platform team.

Ash: And why was that, Ian? Can you remember why that was?

Ian: Why they were so similar? I just remember finding it amusing that they were so similar.

Why were they so similar?

Ash: Because the same person wrote both of them.

Ian: Okay, that's why.

So Martin Fowler's blog is a rich environment. Richer than I had suspected in that it contains words that come, not just from Martin Fowler. And on this occasion, the blog post in question was not written by Martin Fowler, and I... to my embarrassment, I read it too quickly and didn't pay attention to what it actually said on the page.

So sorry, Evan Bottcher. And thank you, Manuel Pais, for pointing that out on our LinkedIn post.

Ash: Credit where credit's due. Very important.

Ian: Yes. So obviously we're very keen to hear about anything we say that's wrong, or at least factually wrong.

Ash: Not that keen!

Ian: Ok...

Ash: Could be here for a while.

Ian: that is true. I'm sure I'm about to say lots of things that some people are going to think are wrong. So I think what I mean is, if I make a factual error, that's demonstrably provable, and goes beyond opinion into the realms of this is an actual fact, then yes, I would like to hear about that.

But if you disagree with my opinion about any of the things we talk about, then fine, have your own opinion and enjoy it in your wrongness.

Ash: Just back on the five-star review for a second as well. I enjoyed the use of the phrase, "shabby mundanity of why things go wrong". I thought that was absolutely beautiful.

Ian: There's a poetry there, isn't there?

Ash: "I liked that you can hear the weariness in their voices like two Technology Eeyores recording a podcast".

Ian: I think we should change all our artwork. Maybe the Clanger can retire and we can just have Technology Eeyores instead.

Ash: Maybe we've come up with a new name.

Ian: I love being a Technology Eeyore.

Ash: I didn't realize I aspired to it until now.

Ian: That is a wonderful thing.

Ash: Absolutely. And thank you.

Ian: Yes. Thank you very much indeed for your very kind words that made our day, in fact, made several of our days.

Ash: Yep, indeed.

Ian: it made the day when we first read it. And now it's making the day when we're talking about it.

Is it time to talk about things?

Ash: I think I get to talk about a thing first this time

Ian: You do. It's your turn to monopolize the airwaves as we call it.

Ash: monopolise the airwaves... begin the monologue...

Ian: Yes. The internal and something.

CHAPTER 2: OTHER DEVELOPMENT TEAM ROLES @ 04:41

Ash: So my thing is... Other development team roles. So... what do I mean by other development team roles?

Ian: That's an excellent question. Which I was about to ask you....

Ash: All right, OK. So the summary that I put in was, and I may be paraphrasing here, "those who can save us from building a load of old rubbish expected by the highest paid person".

That's what I mean by other development team roles, because usually developers and testers aren't particularly capable of not building a load of old rubbish expected by the highest paid person.

Ian: That's interesting actually. Cause isn't the highest paid person sometimes the person whose budget is funding the project?

Ash: It doesn't mean they know what to do with it though, does it?

Ian: Well... quite evidently not. Yes.

Ash: We really shouldn't equate wealth with success or intelligence

Ian: No. We could come up with some examples, but probably best not.

Ash: So, specifically, how did this interest in these other development team roles come about? I would say I was fairly ignorant of design and user research... They were things that were done over there, somewhere else. And usually it would be... designers would come up with fantastical designs, with a grand new vision, which would then take months and months to implement because it would be essentially a massive redesign of everything.

And then you'd be surrounded by lots of sad designers as you gradually try and eke your way through the simplest changes you could make.

So I was used to teams full of developers, testers, and a business analyst, or some kind of analyst, usually a proxy for the product owner. And we just used to build stuff, big lists of stuff from long lists of requirements.

And then these requirements would be years and years old sometimes, and we'd build this stuff and then it'd get released. And then you would never hear of it again: whether or not it solved any problems, met any needs, or, well, anything really. And I didn't really question this too much, but then I guess I started to learn a bit more about the industry.

There were a couple of links as well that I came across – there was one on Mike Cohn's blog about something like 60-odd percent of what you build is never used

Ian: Urghh..,

Ash: Or rarely used. Sorry, I should probably qualify that.

Well, it struck me, cause I'd be like: well, have I been building lots of stuff where hardly any of it will ever be used, and somewhere between 10 and 30% is actually going to be used on a regular basis?

So wouldn't the world have been much better if we had focused on that 10 to 30% and found out what that was, rather than just working through the list of requirements?

Ian: And maybe done it better as a result.

Ash: And then, so this stayed in the abstract for me until I went on a contract as a tester at the Co-Operative in Manchester. And the team was like nothing I'd ever seen. It was full of user researchers, interaction designers, and technical writers. And then there was a couple of ops engineers, and then there was just one developer and one tester, me.

So there was like eight people, and there was one dev and one tester. And I was like, what is going on here? This is madness.

Ian: It sounds a bit like one of those scenarios where you have eight people standing around the developer all shouting things at them to do it, I guess it didn't turn out like that.

Ash: No, it was all very respectful. And we spent loads of time sharpening what we were going to do, what we were going to build. So whenever we did push code, it was with a relatively high degree of certainty that it was what the customer was going to use, what they would want to see.

It was just amazing to be surrounded by all these different roles.

Ian: It's just making all sorts of things click together in my head around design thinking, which we talked about in a previous episode, and other stuff like that. But the idea that we should get together and make sure that we're solving the real problem, and not some highest paid person's idea of what the problem is, I mean, that's really powerful, isn't it?

Ash: For me, it was quite transformative, really, in the way that I thought about what a team could be, what a development team could look like. Rather than having the other roles on the edges of things, with the developers and testers being the sole focus, they could really change things.

Ian: Would it have been better... do you think those numbers worked? I'm just curious about the one developer, because I remember going to a hackathon once where there was quite a preponderance of people that weren't developers, and there were a couple of examples where you saw people all standing in a row around the poor developers: build this, no, build that, no. And that's a bit of a parody... Cause there's obviously the problem of finding the right thing to build, and part of that is the input of developers and testers and other people who can understand how hard things are or how easy they are.

Ash: Yes

Ian: There is a kind of proportionate thing there... I'm just wondering, would it have been better, for example, if there were two developers or something like that?

Ash: I think probably the nature of the project at the time or the product at the time, because it was a brand new product. So the balance of having more researchers and more, say, ops people was probably a bit better. But I think in the fullness of time, once your operations were established, then you might well change that blend. But I guess it's that the overall balance is what you're looking at isn't it - it's like traditionally the balance was way over the other side. So you would have hardly any user research going on and lots of development going on.

Whereas maybe there's something in the middle that you need to find as an organization in order to help you to do that better, and just build that, that 10 to 30% of stuff that people actually want.

Ian: I spent some time in e-commerce environments and they always seem to have a lot of designers and researchers and people like that in those environments. Basically, they've got this very simple metric of do people buy more things, and the ability to have a really short feedback loop on that.

So quite a lot of it is actually devising what experiments to do and then executing them and then immediately making decisions and moving on and doing the next thing, whereas in some other industries where maybe sometimes the outcomes are less clear, it can be perhaps a longer loop and maybe people have less awareness of that.

Ash: I think that's fair. That kind of contrasts with subsequent projects that I've worked on since then. Especially in government contexts where you do have a lot more stakeholders, and it's a bit more listy: the things that need to be achieved, because that's how things are planned.

Rather than having a long running product team, you've got a very project-y type mindset where it's, we need all of these outcomes. Whereas in reality, you're working on it and you think no, we don't. We need to find the 10 to 30% of outcomes that will make the most difference.

Because if you've got those extra layers of management and people who are far removed from the work, they're more interested in, and are incentivized by, delivering the list rather than delivering the 10 to 30% of stuff that would make the most difference. But still, I think you could make gains in those environments as well. You can still make sure that you're doing user research.

I think in a lot of gov UK projects that I've worked on, there's been a lot of emphasis on user research as well within the constraints of a slightly strange project based model. But it's not an afterthought, I don't think.

Ian: No, they have, of course, the Government Digital Service and that whole methodology, which has a lot of user research and making sure you're giving the user of government services a good experience. And they have quite a lot of... or, well, they did anyway... It's been a while since I've worked in those environments, but they had a lot of power to change projects.

Ash: So certainly one of the projects that I worked on was absolutely fantastic in that regard that the closeness with the stakeholders was great, even though it was multi-site it felt like they were genuinely involved with the decision-making about what got built and what it looked like.

And the research into how they work and what they are trying to achieve, went into it as well.

Ian: Yes.

Ash: There's a lot of good there to bring those aspects into your development. I think naturally as well, if you do that, you start to shape things in a slightly more effective way.

As in you start to build the things that matter the most in a way that complements how people work, rather than foisting features onto people who were sometimes even in the same building as you. I remember working on something for a mortgage admin company, and literally the people who were going to use the thing were upstairs.

But there was such a disconnect between the people who were building it and the people who were going to use it. And it wasn't encouraged to go and bridge that divide either.

Ian: If you went up the stairs, it all went silent and everyone looked at you like going into the wrong pub.

Ash: Yes, yes, absolutely. Pool ball stopped over the pocket.

Ian: Yes.

Ash: It was a really transformative thing to work on, and to realize that these roles had such a massive impact. And there was just constantly building gradually increasing fidelity of prototypes, is probably how I would describe it.

Because again, it teaches you that building something in code, testing it, deploying it, is a really expensive way to find out if someone wants it or not.

It's like the most expensive way to find out if someone wants it or not. I just thought they were really skilled at finding out what level of prototype was the best for the situation, whether it was just paper or Figma... I don't know if you've heard of Figma?

Ian: I've got an account on it and I, occasionally use it, so yes, I like Figma.

Ash: Ian subscription mania.

Ian: yes, that's me. Mr Subscription they call me.

I remember when I was first learning about Design Thinking and hearing people saying the best time to fail is when you're just screwing up a post-it note and throwing it. The cost of that failure is five minutes of having that idea and then five minutes of knowing it's the wrong thing, and then screwing it up and putting it in the bin.

Whereas the wrong time to discover that is after your half a million pound development project. And then there's a sort of sliding scale I suppose, in between. I remember watching... Google ventures or somebody did a very good "how to do paper prototyping" video.

It's not a big watch, but it basically showed you how to build paper prototypes and how to use them. It is just a million times better to take someone through that experience when it's just a few bits of paper.

Than going in and actually building it, which, as you say, is very expensive.

I'll include a link to those videos because I think they're genuinely very, very good.

Ash: I just remember... this is all coming back to me now.

One of the first projects I actually worked on, we worked on it for a year, a team of 10 people. And it was quite stressful as well.

It was an add-on for your bank account. For this particular company, I can't say too much. But it was released at the same time as the PPI scandal really hit. So banks then ceased to sell add-ons for accounts. So this thing spent a year in development with the PPI scandal brewing in the background. I went and looked about a year later, and I think two or three people had signed up for it, and it had literally cost millions.

Researching what's going on with your stakeholders, the payback is immense. Absolutely immense.

Ian: Even just knowing what's happening in the blinkin' industry that it's in...

Ash: Quite. I think it does go to that level as well, doesn't it? It's like, what's happening in the industry,

Ian: Yes. Just avoid the cost of building the wrong thing.

If you actually look at that number that you started off with, was it something like 70 or 80% of things built are the wrong thing in some way? If you could avoid building all of that and spend that money on building the right thing, how much better a place would the world be, in terms of software anyway?

Ash: Yes yes... absolutely.

I think there's a question of timing as well, isn't there? Because the problem with working through a list of requirements is that naturally that list starts to go out of date.

When you create the list, many of those things on the list might be the right thing, but then eventually they become the wrong thing because it's the wrong time for them, or someone else has already done it, or whatever it is...

How do you then account for that? Well, you say: rather than having a long list, let's have a short list and try and sharpen that shortlist as best as we can.

Ian: I was going to say, if only there was some way of delivering things one after another, rather than delivering a list in a giant release at the end of two years.

Ash: I don't have a way to do that, Ian

Ian: That would be revolutionary, wouldn't it? If that could be done.

So I guess the question that leaps to my mind, what I've been thinking about, is what I just articulated there: how do you make sure you're building the right thing?

If you're an executive in a company, then you've probably got quite a lot of power to look at team composition and make those kinds of changes. But on the other hand, what if you're in one of those teams, that's building a wrong thing? I wonder if there's anything that someone in one of those teams can do.

If you're a developer, can you start doing paper prototypes and showing them to people and saying, this is what I'm going to build and getting their feedback.

It seems like quite an ask...

Ash: Mmm.. yes yes, absolutely.

Ian: I'm not sure there's an answer to that...

Ash: I think, in the past, it's usually got to the point where we've built the thing.

One of the things that I did in the past was advocate for the ability to show your local environment on a device.

It was in a mobile development context, so it was: how do I show this locally? And then I can just go and grab the stakeholders and say, "Oh, hey, this is what it's gonna look like; we've only done like a day's worth of development on it so far. What do you think?" So I guess my advice there is to try and nudge it back a little bit, just gradually try and move the feedback to the left as best as you can.

But I do agree that as an individual contributor on a team, it can then be hard to make the big changes. Like you said, going all the way back to paper prototypes, but I don't know. I think we ask for permission for these things far too much.

Ian: Yes, that is very true. We should Be More Pirate...

Ash: Yes, I think so...

Ian: ... to cite the title of one of my favorite books.

Ash: I think so. I think sometimes in software development. We're a bit too passive with these things and actually we've got a lot more to give in terms of deciding what to build and how it would be most sensibly done, rather than just accepting the big list and then trudging our way through it.

Ian: I remember having a philosophy that I would trust the people that were thinking these things through, and then having this kind of revelation at some point down the line of my career that actually a lot of those people are just making it up as they go along.

And you can absolutely question them. They might not be very grateful to receive those questions; some people are better at that than others, shall we say. But maybe the best thing to do is to just arm yourself with better questions, and just be prepared to zoom out a bit and say "Why?". "Why are we doing this?", "Why does it have to be like that?", "What's driving this?".

And if you find out that the answer is it's a requirement in an Excel spreadsheet that somebody wrote 18 months ago, then you know, that's a very good thing to question.

Ash: I've worked on projects relatively recently that said that they wanted six nines' worth of reliability, for example, which is a few seconds of downtime every month, or whatever it is. Even in safety-critical systems like aeroplanes and monorails and things like that,

Ian: Monorail!

Ash: Yes, I know... we have mono.

Ian: Not that we're obsessed with cartoons.

Ash: No no, absolutely not. But that was down as a requirement and nobody has really said anything. So it's going to cost us billions to get to six nines' worth of reliability. So let's ask the question, because someone's asked for a number and someone's put a number in with no idea what the implications of that number actually are.

We're going to need to work until the heat death of the universe in order to achieve that level of reliability. And then it'll be the heat death of the universe, and you know...

Ian: It'll be down!

Ash: We'll be back down to zero reliability again.
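As a rough check on those figures, the arithmetic behind "N nines" is simple: the allowed downtime is just (1 - availability) multiplied by the period. A small sketch (the function name is ours, not from any standard, and the year length is approximate):

```python
# Rough arithmetic behind "N nines" of availability: allowed downtime
# is (1 - availability) * period. Uses a 365.25-day year.

SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60

def allowed_downtime_seconds(nines: int, period_seconds: float = SECONDS_PER_YEAR) -> float:
    """Maximum downtime permitted per period for the given number of nines."""
    unavailability = 10 ** (-nines)
    return unavailability * period_seconds

for n in (3, 5, 6):
    per_year = allowed_downtime_seconds(n)
    print(f"{n} nines: ~{per_year:,.0f} s/year (~{per_year / 12:.1f} s/month)")
```

Six nines really does come out at roughly half a minute of downtime a year, a couple of seconds a month, which is why it is such an expensive thing to casually write into a requirements spreadsheet.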

Ian: Did I ever play you the NFR rap?

Ash: Yes, yes.

Ian: Ah... maybe I should include a link to that masterpiece...

Ash: Oh yes.

Ian: ...that I did... I didn't do the rapping bit because I'm me and that would be ludicrous

Ash: A fantastically awkward rap.

Ian: Yes... yo!

Ian: One of my colleagues on a project I was working on when I was at IBM was a chap called Chris Caseley-Austin, and he was claiming to be good at rapping, and I was like "Ah yeah, right", kind of thing. So I made him a little drum loop and off he went and came back with this amazing thing. But the question was, what should it be about? And our biggest headache at that time was trying to get the client to recognize that non-functional requirements were a thing and that they needed to be quite clear about them, whether they're about their five nines or whatever it was that they were after...

Ash: I'll have eight nines please.

Ian: ...yes... eight nines. And so the NFR rap came into being and it's fairly entertaining. So I'll include a link to it. I'm quite proud of it actually, even though the hard bit was the rapping and the lyrics, which Chris did. I just did all the other things. You won't notice those.

Ash: Generated drum beat on a loop, and...

Ian: Yes. Well, have a listen, see what you think. Not you, Ash, obviously, you've listened to it already. Once is enough for anybody, isn't it?

Ash: I think as well the technical writer role is hugely overlooked in development teams. We all do our best to write sensible things into Confluences and wikis and things like that.

Ian: Okay.

Ash: As a contractor for the last 6-7 years who relatively regularly regularly changes roles and needs to pick up a new system, or a new technology or a new domain relatively quickly. More are absolutely impenetrable, and you just end up having to spend most of your first few weeks trying to get everything up and running. So I think having a technical writer to help you to talk about how to run your systems and then also to put your systems in the context of the rest of the company and the rest of your domain would be absolutely immense.

Ian: I think the skill required to do that, to be able to be in the naive position of a user and to write down the things that that person needs to know to do the thing... The developer of something has got almost no hope of being able to do that, because we're just human and we can't.

Ash: And also you come at it from a point of view where there's a lot of assumed knowledge if you've built the thing. And you've adapted to its foibles as well.

Ian: Yes.

Ash: It always reminds me of trying to add automated tests to systems that don't have them, because essentially it just tips out all the strange things that those systems do, which you then need to work around in order to automate what it does.

And it's similar to me with technical writing as well, because it's just that, that act of tipping it all back out again and saying "Well, actually you're doing this really weird thing to work around this sort of slightly strange thing." And you're passing that onto your user as well, but a technical writer, can help you tip all that stuff out as well.

So my final point on other development team roles was that I read a book by the chap who founded a company called Menlo Innovations. I can't remember what it was called... we will link to it. And each team had a software anthropologist to go and find out about their users and their history and what the stakeholders really wanted, and deeply understand their context.

I think it sounds like it would be a hard sell for most places to say "we need to hire software anthropologists", but I just really liked the idea. And I think it describes a bunch of work that we do as technologists which is never described anywhere else by the role of developer or tester or whatever it is, because as a tester, most of the time you're discovering information and curating it and putting it in some kind of sensible order that someone can understand and make a decision from.

So I think there must be some more novel ways we can describe the things that we do that go beyond what we currently call ourselves and give a bit more insight into what development teams are actually all about. Cause there's a lot more going on than programming and testing.

Ian: I think that's a really great note to complete on. I also think that you should allow any user researcher to rename themselves as a software anthropologist, because that will make people ask really interesting questions of them, where they would get to explain what they do. Because when people hear a title a few times, they start to think they know what it is.

Ash: That's very true

Ian: And it might be an interesting way to get people to say, hang on a minute... do I really understand this the way I think I do?

Ash: Every time that I've worked on a project, which has involved some form of legacy code, I dread to use the term but a legacy application as in "something that makes money".

I've always thought we need like a historian to understand the primary and secondary sources of this thing, and what's true and what's not, especially when someone says we need to rebuild the thing: can't it just do the same thing as the old one did?

What did it do? No one knows. Everyone has a thought, but as a tester, often you will go in and say, someone will say this thing did this, and then you go and test it and you're like, "no, it doesn't".

Ian: Yes, "it is its own documentation".

Ash: Yeah. So I think there's something there as well. I think there's just something about describing our roles slightly differently. Martin Hynie did a great talk about renaming testers in his organization to something along the lines of skilled investigators, and it increased the engagement between the testers and the wider development function by a great deal.

So I think that one's definitely worth a watch as well. We should link to that one too.

Ian: I think we should.

That, I have to say, was a very very interesting Thing.

Ash: Thank you, Ian.

Ian: I enjoyed that.

Ash: Me too

CHAPTER 3: INTERLUDE: FAST SHOWS AND WONDERWALL @ 28:13

Ash: Right then, sorry, I'll just clear my throat as well.

Ash: I've actually been a bit... ... a bit growly.

Ian: I was just going to edit those together into the episode, the throat clearings.

Ash: Just me going

Reminds me of the Fast Show, you know... Bob Fleming?

Ash: sorry, I know it's not the nineties anymore, but sometimes it's hard to get over it. Isn't it - you know what I mean?

Ian: yes. Yes. I was listening to some live music, and the introduction to one of the songs was: "well, everybody knows this. It's been infused into your brain since you were born." And the song was Wonderwall.

Ash: Yes, it has.

Ian: I just feel very old now because it was quite a long time after I was born, that it started being infused into my brain.

Ash: At the Ilkley food festival, usually they wait until everyone's had a few beers before they go with Wonderwall. It was literally the second song. They were just like, you know that we're going to play this, we know that you all want it

Ian: Let's just get it out of the way

Ash: so let's just do it and then it's done. And then we could all get on with our lives. I quite like that.

So, it's time for Ian's Thing. Ian, why don't you introduce your Thing?

Ian: It is time for my Thing.

CHAPTER 4: THING 2 - APPLE'S CHILD SAFETY MEASURES @ 29:24

Ian: So I feel like I'm sticking my head a bit above the parapet with this Thing, because there are lots of people with very strong opinions about it. A few weeks ago, a gentleman called Matthew Green, who's a professor at a university in the US, basically dropped the news that Apple were releasing, quote, "a client-side tool for CSAM scanning tomorrow. This is a really bad idea", unquote. So, just to unpack that a bit: the day after he posted that tweet, Apple released a page on their website.

All they've released so far has been some pages on their website about the topic of child safety, and in them they announced three different things. And there was immediately an outcry among the privacy community. There were quite a lot of very strong reactions from privacy activists, who are people who study this and care about it a great deal. And I thought it'd be quite interesting to unpack it, because it started off with a lot of hot takes about how this is the worst thing in the world ever.

And some people are still quite strongly of the view that it's a really terrible thing, but actually digging into it and figuring out what it actually is, is really a big help in forming one's own view about it. So I thought if I go through and describe it a bit, and then maybe we can have a chat about it, or you can just interrupt me during my...

Ash: I'll interrupt you...

Ian: ...my stream of consciousness. So, this child safety announcement basically had three things in it. One of them was that there were some updates to searching and Siri and things on devices, and they basically provide information and help if they detect an unsafe, quote unquote, situation for children or related to child safety.

And no one really seems to be taking any issue with that, or at least not that I've seen. And then they released another feature, which is entitled "Communication Safety in Messages". If you've got an Apple device, like a phone or something like that, and you're part of a family that's been set up in iCloud so that you can buy things with your parent's credit card, things like that: if somebody sends you a message, in Messages, that on-device machine learning thinks might be, shall we say, inappropriate, then instead of seeing that image, you'll be presented with a thing that says: look, we think this might be an inappropriate image. You don't have to look at it; you can just ignore it and move on. And the child can then say, I want to look at it anyway. And if they say that, and they're under 13, then the response from the app will be: well, okay, but if you do, we're going to tell your parents.

Ash: Okay.

Ian: ... and if you're over 13, then that, that doesn't happen.

Ash: So just like a pause. If you're over 13, it just says, are you sure you want to do this? And you say yes or no, basically. And no one will be any the wiser.

Ian: Yes. It's actually 12, not 13.

Ash: Okay

Ian: So, younger children, their parents can be notified if they say yes, I want to look at this anyway. I think people had some concerns about it, maybe. It seems like it is attracting a lot less... ire from people than the third thing, which is what I will now come to, and that is CSAM detection.

CSAM stands for "child sexual abuse material". So, obviously, we're talking about something that's quite horrible and also very much illegal. Apple's CSAM detection is being rolled out only in the United States at the moment. There's no indication of it being rolled out anywhere else, but obviously the chances are that they will expand it, as they do with other things that they introduce in the US.

So what the CSAM detection feature actually does is that, for images that are going to be uploaded to Apple's iCloud Photos, at the point of upload your device will hash the image and compare the hash with a list of hashes of known existing CSAM material that is installed on your phone as part of the operating system.

And the result of that match is put in an encrypted envelope that is sent to iCloud Photos along with your image. So every image that you upload, at the point of upload, will be given one of these "cryptographic safety tokens", I think is how they're being referred to, and there is actually quite a lot of information about this.

Now Apple has released a technical overview of it that is quite detailed about how it works. And there is some cryptographic threshold whereby, after a threshold number of these, which I believe is around 30, have been uploaded into your iCloud Photos, Apple become able, because you've supplied them with sufficient cryptographic keys, to flag that number of images to a human reviewer at Apple. And if it turns out that they do match the CSAM material, then your account gets frozen and you are reported to law enforcement.
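To make that match-and-threshold idea a bit more concrete, here is a minimal Python sketch. Everything in it is illustrative: Apple's real system uses a perceptual NeuralHash rather than SHA-256, and the threshold is enforced cryptographically via the safety vouchers rather than by a simple counter, so the names `blocklist_hashes`, `THRESHOLD` and `scan_uploads` are assumptions for illustration, not Apple's API.

```python
import hashlib

# Hypothetical blocklist of hashes of known-bad images (illustrative only).
# The real system ships perceptual NeuralHash values, not SHA-256 digests.
blocklist_hashes = {
    hashlib.sha256(b"known-bad-image-1").hexdigest(),
    hashlib.sha256(b"known-bad-image-2").hexdigest(),
}

THRESHOLD = 30  # roughly the figure discussed in the episode


def scan_uploads(images: list[bytes]) -> bool:
    """Return True once the number of matches crosses the threshold.

    In the real design the per-image result is hidden inside an
    encrypted safety voucher; the server can only learn anything
    after enough matching vouchers have accumulated.
    """
    matches = 0
    for image in images:
        if hashlib.sha256(image).hexdigest() in blocklist_hashes:
            matches += 1
    return matches >= THRESHOLD


# A single stray match stays below the threshold and is never flagged.
print(scan_uploads([b"known-bad-image-1"] + [b"holiday-photo"] * 100))  # False
print(scan_uploads([b"known-bad-image-1"] * 30))  # True
```

The point of the threshold is in the last two lines: a single match, even a false positive, reveals nothing on its own; only an accumulation of matches makes the account visible to a human reviewer.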

And there are a few concerns with this. So one of them is that governments might pass laws that allow them to force Apple to add other images to the list of CSAM hashes. So, for example, we know that Apple has had to make a lot of accommodations with the Chinese government, and it's particularly beholden to China because its manufacturing basically occurs there.

They've got something of a weak negotiating position with the government there. For example, iCloud services in China are all hosted in China and the government have access to the cryptographic keys. And that's an accommodation that Apple's been forced into by the law in China. And so if the Chinese government came along, in some future where this scanning was rolled out in China, and said "well, we really don't like the tank man image from the Tiananmen Square massacre, we need to know if people have that in their iCloud Photos", then that would obviously start to be a very tricky situation.

Ash: The tool is agnostic of the topic, isn't it?

Ian: Yes, and I think that's a really key point to make.

Ash: The tooling is being applied in a child safety context, but that's not what it's about. It's about comparing the hashes of images. That's what the tool does. All the process around it is what produces the effect on child safety.

The tool is agnostic of child safety.

Ian: Exactly. And I think a lot of the concern that privacy activists have got about all sorts of these kinds of things is that governments often raise the twin spectres of child abuse and terrorism, which we can both agree are horrible things, and attach these agnostic, as you put it, intrusive surveillance technologies to them. And then they turn the argument from "these technology things are bad" into "you must therefore support terrorism and child abuse".

Ash: Yeah.

Ian: And that is a kind of logical fallacy, I suppose, that you see a lot when you look into the relationship between government surveillance and technology. And one response to this CSAM matching thing that Apple are doing is that, actually, if you upload your photos to various other providers like Google and Facebook and all the rest of them, they are all already routinely scanning photos for these matches. And Facebook, I think, reported something like 220,000 cases last year. So you could argue that everybody's doing this scanning already, so why are we having a go at Apple for doing the same thing? And the answer that comes back is that that's being done on cloud servers, and what is offending people's ethical stances, I think, is that this is an example where your device is doing it.

So your device is the one that's making the match and then encrypting it and uploading it. And so people are using phrases like "Apple is scanning your phone for materials" and all this kind of thing, which sounds, I think, worse than what's actually happening.

And Apple have had to roll out a lot more explanations, and "this is what we're doing", and "this is why we're doing it" and "why would you think that?" interviews. Because actually it's a pretty grey area.

Ash: Yes, yes. And as you said about Apple's manufacturing base being in China, they are influenceable.

Ian: Yes. Although the way that they've made it work does make it tricky for governments to do this kind of stuff.

Ash: So big tech solutions to nuanced problems are often using the nuclear warhead to hammer the nail in.

Ian: Yes, because everything they do has to work at such enormous scale that it's very hard to adapt to individual kind of cases or situations in a nuanced way.

Ash: Yes, yes, absolutely. So I think, even if their intentions are true in terms of protecting whichever group of people they wish to protect, we're all part of that process, aren't we?

Ian: Yes

Ash: So basically we all have to use their applications within those constraints. Scanning on the device, I think, upsets me less than everything being uploaded and scanned server-side.

Because at least then you can turn off the synchronization, if you wish to. To me, my privacy rights feel like they're more protected by client-side scanning than server-side scanning.

Ian: And I think that's something that's on Apple's mind, because at the moment Apple could scan your iCloud Photos images server-side: while their encryption is good and they have encryption at rest, they have the keys. One theory that is being talked about is that Apple want to have more protection in terms of end-to-end encryption, where they don't have the keys.

And so this mechanism would still work even if they stopped having the keys to decrypt your iCloud Photos. They could actually make those end-to-end encrypted, so that Apple had no access to your photos, and this way of scanning would carry on working, which I think is probably something that's on their mind. When people say your device is being scanned by Apple, that's really an overstatement of what's happening.

What's happening is that, at the time your photo is uploaded to iCloud, your device compares it to a list and uploads the result of the comparison. But that's not quite the same as scanning your phone. And if you turn off iCloud Photos, no scanning happens at all.

That's something that a privacy activist can decide to do if they don't like this approach. But some people have a very visceral reaction to the idea that their phone isn't entirely doing things that are just for them, and... I guess I kind of understand that, but I don't have that visceral reaction, and it sounds like neither do you.

Ash: No. With all of this tech, whether it be Apple or Google or Facebook, we enter into a literal contract...

Ian: That we never read.

Ash: but then there's the social and financial contract as well. For me, it's easier to look at the nature of the company and how they behave, because we know that obviously Google prioritizes having more and more data in order to target things at you, let's say. They can be beneficial as well, but that's a lot of their thinking.

And Facebook are probably pretty similar as well. They wish to market things to you. That's what keeps their platform going. So I understand the nature of these platforms and what they're trying to do and treat them as such. And with Apple, I think it's similar as well. Apple obviously have their biases, and I will only provide them the data that I wish them to have.

I don't do photo syncing with Apple. You still get to choose. And I feel like with Apple, at least you get more ability to choose what gets harvested from you. Whereas with the other platforms, I feel like it's more done implicitly, whereas with Apple, you have to be much more explicit with allowing permissions to do things.

So the example was the tracking for each of the apps recently, wasn't it? You open an app and it says "this app would like to track you", and you can either say okay, or turn it off for this app but not for other apps, or turn it off for all apps and never be asked again. And that speaks to me as a good indicator of Apple's intent, because they're saying to you: we can ask you this question once, and if you say never ask me again, never track me again, then that's what the device will do.

Whereas I'm not convinced all the other platforms really behave in that way. So I think there's much more transparency with the Apple way of doing things than there is, would say the Facebook or Google way of doing things.

Ian: I think that's right. And I think, fundamentally, Apple makes a very great amount of money by charging a lot for a device. No doubt it makes huge amounts of money, and they are much more expensive than the alternatives. But then equally, the alternatives are partly funded by this data contract... this actual and implied contract with Google that you will share your data and photos with them, and they will get benefit from that. They will be able to target ads to you, but they will also be able to use those things for training their machine learning models... they'll be able to understand relationships between you and other people... there's all sorts of stuff that they can do with that data that Apple has chosen not to do.

And I think Apple's overall intent, and it's supported by that business model, is: we will charge you more, but your device will be much more on your side than perhaps some other platforms. Interestingly, the openness of Android means that if you buy a Pixel, you can actually install extra-secure versions of Android that people have developed, which take out a lot of the Google services and things like that.

So if you have a real need for privacy... there are people who live in oppressive regimes and all the rest of it, who... Governments always think that people who live in repressive regimes should be allowed to have privacy, but that people who live in their own liberal countries should not. But for people who really need it, it is available, and I guess if I was in that situation, maybe I would be looking at an Android phone with one of those more niche versions of Android installed on it.

I think on the whole I trust Apple more than the others, and I find this feeling about Apple being the big baddie hard to share. Cause when I... in the nineties, Microsoft was the big baddie, and I remember resisting them fiercely and trying to use Linux for things, and managers going, "what's this open source... bah... you can't be doing that" and all of those discussions. But I remember in the nineties, Microsoft was the evil empire. And I think in many people's way of thinking, especially younger people... people who are the age now that I was in the nineties... Apple is that evil empire.

And I find that quite difficult to internalise. I think, fundamentally, Apple is far from perfect. No organisation is perfect, and certainly a capitalist megacorp is going to be not perfect. And I've been reading some disturbing stories about the way some people have been treated there, and stuff like that. But very broadly, I feel as though they're probably the least worst of the very big tech companies, the FAANGs.

Ash: It's a question of participation, isn't it? Rightly or wrongly, the world has been made very difficult to participate in without the influence of these large tech companies. A way to cope is to look at them and their behaviours, and decide what fits with your own model of how you think about the world. If I speak to someone who isn't necessarily in a technology role, and you talk about the balance of convenience and privacy, a lot of people are into the convenience

Ian: Yes, really...

Ash: and could care a lot less about the privacy

Ian: Yes, I think that's right.

Ash: And they literally say things like "I've got nothing to hide."

Okay then! I think sometimes in tech it gets a bit navel-gazy.

Ian: No, that could never happen...

Ash: so I understand the wider implications, but there's more than one point of view at play here as well. On the child safety side: if all these platforms had total privacy and total encryption, then they wouldn't make any money, so they wouldn't exist. And you would have no way of getting a handle on any of these safety issues that exist in the world, so...

Ian: No, indeed...

Ash: ...it's completely imperfect. One thing I found interesting, Ian, was... so the actual hash of the image, it could be a partial match as well, couldn't it?

Ian: It absolutely is.

Ash: So you could be having your picture taken somewhere and the tank man from Tiananmen Square is in a picture behind you. Say our worst authoritarian nightmares come true: that would get flagged to the relevant authorities and someone would turn up and take you away.

Ian: Well... so that would have to happen at least 30 times before you pass the threshold. And then ...

Ash: Maybe it was in my favourite restaurant or something, you know?

Ian: ...and then a human at Apple would be looking at it, to make sure, before it was flagged to law enforcement. So actually I think there are reasonable safeguards before you get there. Yes, I get it. And people have been playing with a version... so the algorithm is very interesting, the NeuralHash algorithm, because obviously you could hash the image file, but then if someone changed so much as a byte it would no longer match the hash. So they've got this thing called NeuralHash, which is really comparing, going into machine learning speak, features of the image.

And so, it's quite interesting: the hash that it comes up with might be quite similar if the image has been altered, or made black and white, or in some way fiddled with to try and make it unrecognizable. But people have configured GANs (Generative Adversarial Networks) to try and generate images that match a NeuralHash without actually being an image of that thing. So one attack is: we'll make sure your phone ends up with all these pictures on it that trigger this hash, even though they're not pictures of those kinds of terrible, illegal things. So I guess it's far from perfect. Apple have seen the examples of this and said "no, this is not the same as our algorithm", but there is a slippery slope here, and it's not perfect.
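The difference being drawn here between a byte-exact cryptographic hash and a perceptual hash can be sketched in a few lines of Python. The toy "average hash" below is a stand-in, not NeuralHash (which is a neural-network-based feature hash), but it shows the property that matters: a tiny edit completely changes the SHA-256 digest while leaving the perceptual hash unchanged.

```python
import hashlib

def average_hash(pixels):
    """Toy perceptual hash: one bit per pixel, set if brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return sum(x != y for x, y in zip(a, b))

# A 4x4 greyscale "image" and a copy with one pixel slightly brightened.
image = [[10, 200, 10, 200],
         [200, 10, 200, 10],
         [10, 200, 10, 200],
         [200, 10, 200, 10]]
tweaked = [row[:] for row in image]
tweaked[0][0] = 12  # a tiny, invisible edit

# Cryptographic hash: the two files no longer match at all.
print(hashlib.sha256(str(image).encode()).hexdigest() ==
      hashlib.sha256(str(tweaked).encode()).hexdigest())  # False

# Perceptual hash: identical (Hamming distance 0), so a match survives minor edits.
print(hamming(average_hash(image), average_hash(tweaked)))  # 0
```

It's exactly this tolerance that makes the adversarial attacks mentioned above possible: if two quite different images can be nudged toward the same perceptual hash, collisions can be manufactured deliberately.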

And I think Apple has put in place reasonable safeguards, but there are all manner of things to worry about in it. Maybe... they're just rolling it out with this comparison list of image hashes that come from a child safety organization in the US, and it's just being rolled out to the US, and they've got all these safeguards in place, but you have to ask yourself how steep that slippery slope is. They have made it hard, I think, for governments to abuse it, but it still doesn't seem impossible.

Ash: The tool is agnostic of the topic. Apple is a company and therefore not incorruptible if they're threatened with lack of access to a market based on the conditions of this tool and what it looks for. So when you decide whether or not you wish to engage with it, keep those two things in mind.

And also the fact that, even though it's a nuclear weapon to solve a scalpel's problem, protecting vulnerable users and vulnerable groups on the internet is really important. So balance those two things in your head and ask: what does this mean to me? How does that appeal to my sense of values? And then make your decision from there. If you want ultra security, like you say, you can go and get yourself an Android phone which is not built by Google, install a super secure version of Android on there, and then get yourself messaging applications where you have the keys to decrypt your messages, because they exist! All these things are available to you. I would say, though, that to the average non-technical person, that particular world is not that available.

Ian: No, and they probably wouldn't like being there either. Because it would be inconvenient.

Ash: Absolutely. So I think that also in tech, we also need to consider, that we're not the only people who are going to be using this.

Ian: and not everybody has a choice.

Ash: Absolutely. People have made the convenience and privacy trade-off, and rightly or wrongly, they accept it. Whether or not you agree as a technology security person, they've made it.

Ian: I think there are some really great nuanced discussions going on about this, and also a lot of frothing rage.

Ash: Yeah.

Ian: We'll include some links in the show notes to some of the discussion that's going on. I've actually made quite a lot of notes myself; maybe I'll just share the whole lot in the show notes, because, as you can possibly tell, it's a very deep and confusing topic, and it would be very easy for me to make a mistake or say something wrong, so we'll include all of that in the notes.

Ash: Well, as with many of these things, just look at your own values and see what you want out of it, and whether or not you agree with it. The problem with hot takes is that they're far too hot, as in far too soon and non-nuanced, and so they're either overly enthusiastic or overly negative about a particular change.

Whereas this one, I agree Ian. I think there's a lot of nuance to it...

Ian: absolutely

Ash: ...and I prefer to make up my own mind.

Ian: I shall now wait for the people with the torches and pitchforks to show up outside my house.

So that's two Things.

Ash: Two things, two great things. I really enjoyed that.

Ian: Yes. Yes, I think we might get some feedback for this one.

Ash: I'm trying to put more enthusiasm into my voice as well. After the weariness comment in the five star review. I'm not weary, honestly...

Ian: I am! I mean...

Ash: I'm not weary of the world of software development. No no... I'm not overly cynical about the shabby mundanity of it all.

...

No comment from Ian, there...

Ian: Yes, No comment from Ian there!

CHAPTER 5: OVER THE EDGE OF INTERESTINGNESS @ 52:48

Ash: Are we going to do the live stream thing or?

Ian: Oh yes. That's a really good thing to remember. It's almost as though we'd made a list in order to remember the things

Ash: it's almost like I'm looking at the list

Ian: I can talk or look at the notes, but not both at the same time.

We have a capability in the software that we use to record these episodes to live stream them. And so we just wondered if anyone would be interested in us doing that. And if you are, tweet to us, and then we will take steps to try and live stream the next episode, where it makes sense to do so anyway.

Ash: Yep, you get to see it as it happens: how all the opinions are formed, spilled,

Ian: Edited out!

Ash: ...and edited out, into the smooth sounding podcast that you hear.

Ian: We like to think it's smooth sounding...

Ash: yes.

Ian: So let us know if you would be interested in us doing that. I don't want to do it just for the sake of it, but if people are interested, then it would be worth putting in the additional labour to make sure people know when we're going to do the recording, so they can join us.

Ash: That would be awesome.

Ian: That *would* be awesome.

No heckling will be allowed. Obviously any heckling will be subject to immediate termination of your connection.

Ash: Yes, you'll be muted. And we won't look at the chat

Ian: yes. We'll plough on regardless.

Ash: In our zero feedback environment

Ian: Yes... yes.

Ash: To be fair, a five star review is the kind of feedback that I'm into. So, if you've got anything to say, leave a five-star review.

Ian: Yes! We've got a web page at https://WhatALotOfThings.transistor.fm - you can subscribe there, or pretty much anywhere that you listen to podcasts. Is our website interesting for any other reason? No, not yet...

It's got all our episodes on, and little bios and things like that.

Ash: That's interesting enough...

Ian: Possibly, we don't want to push it over the edge of interestingness into whatever's beyond...

Ash: Into the Trough of Disillusionment.

Ian: Yes. That. That trough of disillusionment.

Ash: Yes... been there.

Ian: So thank you for listening.

Ash: Yes, thank you very much, everyone.

Ian: We've got more things, so we will be back, but I'm not saying when, because...

Ash: ...because we don't say when!

Ian: ...we all know that leads to disaster... well okay, slight embarrassment.

Ash: ...that leads to a two year lead time.

Ian: Yes. I've learned my lesson. Okay. See you next time!

Ash: See you next time.

Creators and Guests

Ash Winter
Host
Tester but not a quality engineer. Talks about testability.

Ian Smith
Host
Happiest when making stuff or making people laugh. Tech, and Design Thinking. Since 2019, freelancer and FRSA.