A nice writeup and demonstration video from Duo Sec showing some problems with PayPal two-factor authentication.
We developed a proof-of-concept exploit to leverage this lack of 2FA enforcement, interfacing with the PayPal API directly and effectively mimicking the PayPal mobile app as though it were accessing a non-2FA account. The exploit communicates with two separate PayPal API services — one to authenticate (only with primary credentials), and another to transfer money to a destination account.
It appears that the PayPal mobile app authenticated to the back-end API and received a valid session token, along with (for two-factor-enabled accounts) a flag indicating that the account was 2FA-enabled. Of course, at that point the flag was irrelevant, since the back-end had already accepted and confirmed the authentication request, even without the two-factor interaction.
Duo were able to create a nice Python script that exploited this vulnerability, to log in and send money, all without triggering the two-factor verification. (They still needed the user’s original userid and password, though).
It looks like it took a while, but PayPal were able to roll out most of the needed mitigations, though some issues may still remain.
Check out the link for a great writeup and nice video from @quine.
It seems Apple has made the prerelease Configuration Profile Key Reference available to the public. This is the technical documentation for much of the iOS and Mac enterprise management capabilities Apple makes available via MDM vendors, Configurator, etc. (The other main document, the MDM Protocol Reference, remains behind the developer site authentication wall.)
Most of what we think of as “part of MDM” is really just the configuration settings that MDM can push out. This reference (expanded last year to include OS X) includes all the publicly known settings that can be configured via a profile. These profiles can then be installed on a device via USB, HTTP, or MDM.
The linked article includes a preliminary list of new changes, but as noted, we are still early in the beta process and all this could change before iOS 8 is released this fall.
A few days ago I commented on the iOS malware situation. One might sum it up as “fanboys smugly assert there is no iOS malware; anti-fanboys smugly point to this list as proof that the fanboys are idiots.”
Then not three days later, American Banker posted an article about Svpeng, an existing trojan that’s been making the rounds in Russia and is now hitting US users.
What I found most interesting about that article is this: Not once do they mention the platforms affected by the malware. Hell, even the Kaspersky press release is coy about it, only using the word “Android” once, and that in their formal name for the trojan (Trojan-Banker.AndroidOS.Svpeng.a). This trojan seems to have been out for almost a year, but now that it’s hitting US users, Kaspersky is putting on a full-court press in the…er…press… (I should really steer clear of sports analogies). Predictably, I’ve had customers anxiously asking about the trojan, and whether they or their customers should be concerned.
A quick Google search on the trojan’s name nets 1 scan and 1 summary report from Virus Total, and 9 (mostly breathless) news reports about this horrible new scourge that the banks can’t do anything about. Most of these seem to be regurgitated press releases or wire reports, with no useful details at all. And, again, most of these don’t mention what platforms the malware attacks.
So what does this thing actually do? I found a mention on Emerging Threats from January, and some more details from Kaspersky from last November, but thus far, it’s damned near impossible to figure out just how this spreads, let alone how to block it.
What I wish Kaspersky had put in their press release (which would’ve percolated to many of the hundreds of articles simply repeating their information) was:
What kinds of phones does this attack? (seems to be Android only)
What version of the OS is affected? Rooted devices only? (I don’t know)
How does it spread? (Text message? Web links? Infected apps in Play store?)
Can it be blocked with AV or other software? (some say yes, some say no)
Once infected, can it be removed? (Kaspersky says no)
Once activated, can a device be saved without paying ransom?
and so on.
If the folks discovering, naming, and alerting the public about the malware (Kaspersky), and the relevant industry-specific press (American Banker) can’t explain the problem in useful terms, should we really be surprised when mainstream press can’t do any better? (also, 10:1 this is on all the morning shows by the end of the week). Which means that soon, everybody will be asking about it, and worried about their data (and their money).
I’m just afraid that very few of us will be able to answer those worries with anything approaching a useful response.
A short blog post is making the rounds on Twitter this morning, aiming to burst the myth that “malware for iOS doesn’t exist.”
With our FortiGuard Labs reporting that 96.5% of all mobile malware is Android based it would be easy to see why someone might opt for an iPhone. But, users beware. Don’t write off iOS as the secure alternative to Android just yet! Despite, Android malware being nearly an epidemic, or as Tim Cook referenced, “a toxic hellstew”, iOS is not immune.
I’m not a malware expert, and at the (substantial) risk of being (further) branded an Apple Apologist and/or Fanboy, let’s review their list.
Collects all SMS messages
User installs via Cydia
Collects SMS, call, URL, GPS data
Requires physical access
POC - Worm
Worm - via default SSH password
Steals SMS database
Worm - via default SSH password
POC - Retrieves private data via APIs
User installs via App Store
Calls premium phone number
User installs via App Store
User installs via Cydia
Asks for acct info, spams friends
User installs via App Store
(no iOS details given)
Hijacks ad clicks
Steals AppleIDs from SSL traffic
The blog post lists 11 separate cases of iOS malware over a 5-year span (but strangely, doesn’t include Charlie Miller’s POC). Of these eleven cases:
Two are proof-of-concept demonstrations not released in the wild
Eight require a jailbroken device
Five must be manually installed by the user
One requires physical access
One is actually advertised as malware (it’s explicitly a keylogger)
None of the three items distributed through the App Store modifies the device’s operating system
Dropping the POCs gives us 9 items. Dropping the explicit keylogger (the user clearly knows what they’re getting when they ask for it), we’re at 8 items. Of those 8, 2 are valid applications using published Apple APIs. These two were pulled from the App Store, and the APIs they utilized now prompt the user when accessed by a 3rd party application.
The remaining 6 malware items only affect Jailbroken devices, and of these, at least one (possibly two, it’s not clear how MobileSpy is installed) must be explicitly installed by the user via Cydia. Some of these appear to be simple spyware, while a couple are more dangerous malware, especially SSLCreds (also known as Unflod Baby Panda):
Trapsms (steals SMS data)
MobileSpy (collects SMS, URL, GPS, and calling data)
AdThief (replaces device ID info to steal revenue from ad usage)
SSLCreds (intercepts SSL communications and steals Apple IDs and passwords)
What conclusion can we draw from this? That some kinds of malicious activity can slip through to the App Store (especially if we consider Charlie Miller’s POC) but, to our knowledge, these have been found and removed quickly, and the underlying weaknesses in the operating system have been addressed. However, all of the remaining seen-in-the-wild instances of malware require a jailbroken device, and possibly direct user action to cause the installation or spread of the malware.
Is it surprising that malware can infect a jailbroken phone? Hardly. In fact, I’m actually kind of amazed that we’re only able to identify six instances of such shenanigans.
Will a fully-patched, non-jailbroken iOS device ever be susceptible to more “traditional” malware, that installs and spreads without the user’s knowledge? Possibly. The vulnerabilities exploited by the Jailbreak Me tools could certainly have been used by malware authors, though Apple patched both of these vulnerabilities within days of their becoming public knowledge.
The bottom line here, to my mind, is this: If you do not jailbreak your iOS device, you’re very well protected against malware, and though some things slip through, Apple has been doing a pretty good job of removing such items once found, and further strengthening the system against similar future attacks. I’d be cautious in pointing to this latest list as proof that iOS is just as unsafe as any other platform, because I really feel the evidence suggests otherwise.
[Full disclosure - I violated Betteridge’s Law of Headlines when I titled this post “iOS Malware - A real problem, or just FUD?”. My apologies. A less click-baity title is now in place.]
One of the things I was most looking forward to with my new iPhone 5S was faster switching between applications. It seemed like my 4S always took 5-10 seconds to toggle between two programs, even simple apps. Jumping from Angry Birds to YouTube (to see what I’m doing wrong) and back again was agonizing.
Unfortunately, though the 5S is significantly faster, switching has in some ways become worse. The slowest reloads are faster, but I feel like I need full reloads more frequently. I’m convinced this is simply due to memory usage. Many applications (especially those with lots of full retina artwork) are taking the device to its RAM limits. No amount of new processor power can mask the fact that there’s just not enough memory in the device.
So it occurred to me a while ago — what if Apple can come up with a good way to load a program in memory in small chunks? Last year at WWDC, they showed off all kinds of processor and OS tricks to make OS X as power-efficient as possible. Could they do something similar for memory?
Rather than loading an entire application into memory at once, could iOS offer a way to let programmers break their apps into smaller chunks that are loaded only when needed? Kind of like the time-based grouping of operations in OS X, only focused on memory rather than CPU usage. Why should the code for managing app settings be in memory while the user is off shooting zombies? Do you really need chart generation code when we’re just editing text in a word processor?
If we’re able to do something like this, then all kinds of new things suddenly become possible. The oft-rumored “side by side” application usage could be a possibility, for one. I’m not sure that’d work otherwise, as the apps just seem to need too much RAM. Of course, Apple could double the memory in the devices, but then apps would just expand to fill that size as well.
Another thing that might then be possible is sort of an “iTunes Match” for applications. The OS would keep frequently used applications on the device at all times, but only “install” rarely used apps on demand. If you’re only downloading bits of an app at a time, as needed, then this might actually be feasible.
It’s interesting in a way — some time ago, there was talk about “Just In Time” delivery of applications over the internet. It’s possible this was part of the promise of Java — I may have blocked the precise origins of the concept from my memory. I distinctly remember thinking at the time, hell no, if I buy an application I want a copy of it locally, all the time. That I wouldn’t trust that I’d be able to retrieve my applications over the net like that.
Well, guess what? We skipped right over JIT applications and put all our data in the cloud instead. Kind of funny, when I think about it.
As long as I’m rampantly speculating (on the day that WWDC opens, no less — I really should get these thoughts out when they happen and not months later…), let’s take this line of thinking to its logical, yet absurd, conclusion. If all our data is on the cloud, and if all our apps can be downloaded from the cloud on demand…why do we even need “our own” iPads? Just authenticate yourself to the iPad, and boom!, it looks like your iPad, with all your apps and data coming down when you ask for them. Done using the iPad? Sign out, and boom! again, all the data’s gone.
Of course, Apple has not shown any indication that they want to follow the concept of “logins” on an iPad, but it’s an interesting thought anyway. Go to Starbucks, borrow one of their store iPads, work with it as if it’s your own, then log out and leave it for the next user. Apple’s already toyed with some of this, as you can configure an OS X machine to allow guests to log in using Apple IDs (though it’s obviously not doing this data / application magic at all). Also, something akin to this was always hoped for with NeXT machines (though in that case, you’d carry your computing world around on a 256MB magneto-optical disk, which was damned cool for 1988).
I understand that this sort of change would be difficult to achieve, and possibly impossible in practice. But it’s certainly interesting to think of. And, putting aside crazy ideas about “logging in” to a friend’s iPad and replicating your entire environment on the fly, I still think that something like this would be very useful (and may even be required) for making apps much more responsive, especially when switching between applications.
And we could even use the iWatch and BTLE for seamless authentication!
I’ve been occasionally using a VPN that requires a Google Authenticator code to connect. I say “occasionally” because it’s a pain to use — I have to launch Tunnelblick (the VPN client I’m using on my Mac), then get the VPN password out of my password manager and paste it in, then open my phone, launch Google Authenticator, and enter the displayed tokencode next to my password.
It’s not horrible — but it’s awkward enough that I find myself looking for ways to avoid using this particular connection. Then the other day, a co-worker suggested using a script to dump the credentials into the VPN config on the fly and re-launch. And so, my lunchtime project was decided for me.
Tunnelblick won’t let you write credentials to the configuration file, but it will happily pull them from the OS X Keychain. So now I just need to find a library to write to the keychain. Turns out that’s easy, too — there’s a command at /usr/bin/security that does exactly what I need. Now I just need to make it look pretty.
And that’s what I’ve got now: “gtb” — Google(auth) + TunnelBlick. This script:
Prompts you for your base VPN password and Google Authenticator Key
Writes them to the keychain
For normal use:
Reads the password and key from the keychain
Computes the current Google Authenticator tokencode
Writes to the keychain entry Tunnelblick uses for the VPN password
Launches Tunnelblick and opens the selected VPN
Delighted (or maniacal, your choice) laughter is left as the responsibility of the user.
How does it work? Well (after setup) the first thing we need to do is read the data from the keychain. For a Tunnelblick VPN called “MyVPN” this can be done with the aforementioned security command (something like /usr/bin/security find-generic-password -s Tunnelblick-Auth-MyVPN -g, which prints the item’s attributes followed by the secret itself).
The important bit is the last line of that output, where the “secret” stored in the keychain entry lives (it’s always called “password,” no matter what you store there). My script reads that entry, splits on the ‘:’ into google_key and vpn_password fields, and then goes on to compute the tokencode.
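Sketched in Python, that read-and-split step looks something like this (the helper names are mine, the service name matches the examples in this post, and the -w flag asks security to print only the secret):

```python
import subprocess

def read_keychain_secret(service="Tunnelblick-Auth-MyVPN"):
    # -w prints just the secret for the matching generic-password item
    out = subprocess.check_output(
        ["/usr/bin/security", "find-generic-password", "-s", service, "-w"])
    return out.strip().decode("utf-8")

def split_secret(secret):
    # the secret is stored as "<google_key>:<vpn_password>";
    # split only on the first ':' in case the password contains one
    google_key, vpn_password = secret.split(":", 1)
    return google_key, vpn_password
```
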
Treats the key string as a Base-32 string and decodes it into a binary key
Computes the current timestamp, based on the UNIX epoch, to 30-second accuracy (that is, the timestamp number will increment by 1 every 30 seconds).
Computes the HMAC-SHA1 of (key, timestamp)
Reads the last byte of the resultant HMAC digest, and uses the lowermost 4 bits of that byte to select an index into the digest
Reads the four bytes from the digest, beginning with the index, as the base for the token
Strips the most-significant bit off that 4-byte word and reduces the result (using modulo arithmetic) to a 6-digit number
Returns the number (padded with leading zeroes if necessary)
One interesting problem I ran into: The Google key was 22 letters long (after stripping spaces). All the examples I could find online showed only 16-letter keys, but clearly this longer key worked fine in the iPhone application. However, it wouldn’t work in my script — I kept getting errors in the Base-32 decoding. Padding the string with additional “A”s worked — so my script takes the provided key string and keeps appending “A”s until it produces a valid Base-32 decoding.
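In code, that retry looks something like this (a sketch; note that appending “A” characters technically appends zero bits to the key material, but it matches what the phone app appeared to accept):

```python
import base64

def decode_b32_key(key_str):
    # normalize: strip the spaces Google uses for display, force uppercase
    s = key_str.replace(" ", "").upper()
    while True:
        try:
            return base64.b32decode(s)
        except Exception:
            # already a multiple of 8 characters and still failing: give up
            if len(s) % 8 == 0:
                raise
            s += "A"  # pad and retry, as described above
```
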
There’s a great library that does all this on github (thanks, Tom Jaskowski), but I simply extracted the part that I cared about and incorporated it directly into the script. Here’s what it looks like in Python:
import base64, hashlib, hmac, struct, time

key = base64.b32decode(key_str) # the authentication key
num = int(time.time()) // 30 # epoch time to 30 sec
msg = struct.pack('>Q', num) # pack into a binary thing
# take a SHA1 HMAC of key and binary-packed time value
digest = hmac.new(key, msg, hashlib.sha1).digest()
# low 4 bits of the digest's last byte tell us which 4 bytes to use
offset = ord(digest[-1:]) & 15
token_base = digest[offset : offset+4]
# unpack that into an integer and strip off the most-significant bit
token_val = struct.unpack('>I', token_base)[0] & 0x7fffffff
token = token_val % 1000000
return "%06d" % token # pad with leading zeroes
Once the tokencode has been computed, it’s appended to the base password, and written back into the keychain using security:
/usr/bin/security add-generic-password -U -s Tunnelblick-Auth-MyVPN -a password -w myVPNPassword123456
Then we use a little Applescript magic to launch the right VPN connection in Tunnelblick:
echo 'Tell app "Tunnelblick" to connect "MyVPN"' | osascript
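Those two steps can also be driven from the Python script itself; a sketch (the function names are illustrative, and the argument lists mirror the commands shown above):

```python
import subprocess

def security_args(vpn_name, secret):
    # build the argv for writing the combined password+token to the keychain
    return ["/usr/bin/security", "add-generic-password", "-U",
            "-s", "Tunnelblick-Auth-" + vpn_name,
            "-a", "password", "-w", secret]

def osascript_args(vpn_name):
    # build the argv for telling Tunnelblick to open the VPN
    return ["osascript", "-e",
            'tell app "Tunnelblick" to connect "%s"' % vpn_name]

def store_and_connect(vpn_name, secret):
    subprocess.check_call(security_args(vpn_name, secret))
    subprocess.check_call(osascript_args(vpn_name))
```
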
…and that’s it!
Security of Keychain Items
A quick note on the keychain items themselves. The security application, by default, can access these items without prompting the user for their password. This means that, if you leave your desktop unlocked, anyone could walk up and extract the VPN credentials with a couple quick command line calls. So it’s best to open up the Keychain Access application, find the VPN’s “password” and “auth-data” entries, and secure them. Do this by removing the “security” application from the list of apps which can access the data at all times. (Leave Tunnelblick authorized to read the ‘password’ entry so it can launch more smoothly). You’ll also want to set the “Ask for Keychain Password” flag. Then, when you run the script, the Keychain will prompt you for your login password (to access the auth-data entry), and after that everything will happen magically without further intervention.
If you’d prefer to not store the Google Authenticator credentials in the keychain (but rely on the Google Auth app on your phone, for example), then enter “none” when setting up the Google Authenticator key, and the script will prompt for the current tokencode when it is run. Otherwise, technically, we’re kind of eliminating the “2” from “2-Factor” authentication.
I’ve posted the entire script as a gist on Github. I hope people find it useful, but remember, this is a hack written over lunch one day (and cleaned up during free moments over a couple days following). It might work perfectly for you. It might not work at all.
The basic concept should be applicable to other situations — I’d bet it could be changed to work with Viscosity (or other Mac VPN clients), or with other OTP codes, with minimal effort. But I use Tunnelblick, so that’s all it supports at the moment.
I’m seeing quite a few stories this morning (really, it started yesterday afternoon) about iOS users in Australia getting their devices locked out with a $100 ransom message.
It’s unclear at this point exactly how this is happening, but it seems evident that the affected users are having their Apple IDs hacked. Typically, such hacks involve things like weak passwords falling to brute force attacks by a botnet or falling for a phishing attack. That doesn’t really explain the fact that all the affected users appear to be located in Australia, however. Perhaps the most likely possibility is that an Australian e-mail provider has been hacked, giving hackers the ability to reset the password of weakly-protected Apple IDs associated with those e-mail addresses. Regardless of how it’s happening, though, those Apple IDs are being compromised.
So far this morning I haven’t seen anything definitive, but an Apple ID password reset email hack seems a reasonable presumption. Adding 2-factor authentication for all your Apple IDs has been a recommendation for a while now, and this story kind of makes that even clearer.
Another interesting point: If your device is already locked with a passcode, the remote attacker can’t change it — so they can’t lock you out and demand ransom. Of course, they could simply wipe it instead, out of spite.
And after hacking your Apple ID they can (possibly) buy things at the store using your credentials, and certainly delete information from your account (contacts, calendars, files) or just generally make your life miserable.
Bottom line: It remains important to select strong passwords (so they can’t be guessed) that aren’t reused (so one compromise won’t break your other accounts), and to use 2-factor authentication (so they can’t just hack your email account and send a password reset). And when setting up 2-factor authentication, if you’re given a “master reset password” of any form, be sure you retain it somewhere safe. It’s even a good idea to print it out and store it in a couple places at home (like with your passports and other important legal documents) (just don’t keep it in your wallet).
A quick take on some of the many issues backing up large, complicated iOS configurations to iCloud backup.
Apple needs to make some hefty changes to iCloud. It needs to allow you to backup and take advantage of the full size of your iPad (128GB max) inexpensively. In my case, since most of my data is strung between four cloud services (OneDrive, Dropbox, Google Drive, and iCloud), I didn’t need to back it all up. However, for a while I used iBooks to store my side-loaded ePubs and PDFs. In this case, I had to stop backing it up to iCloud because of space concerns. Losing all of those in the case of a system failure led me to put my PDFs back on Dropbox and use Goodreader to access them. Being able to keep them all in iCloud (about 50g of data) would be very nice.
That the default iCloud backup space (5G per account, not per device) is insufficient has been pretty well trumpeted for years now. But many of the other issues here, especially the logistics of restoring a lot of apps at once, don’t get mentioned as often.
Backing up to iCloud seems like it should be perfect, but if restore is so difficult, slow, and unpredictable, then it’s really not viable. It seems that the “proper order” of file restoration isn’t guaranteed, and some apps (1Password, in this example) may break until the restore is complete. It almost feels like you have to start the restore, and then not use the device at all for a day or two until it finishes (and you’ll have to kick it every now and then to keep the process moving).
I’ve avoided going to iCloud backups, partially because of the space problem, but largely because I don’t understand it. What’s really backed up? How? How easy is it to restore, and can I restore just parts of it? What happens if the backup breaks?
For a while I was doing some crazy script-fu to backup my iCloud calendar to a bunch of .ics files on the desktop. I may need to revisit that, and extend it for all the other data types stored in iCloud.
As for full-device backups, I’m still holding out hope that I can get iTunes sync over Wi-Fi to work at home. But after multiple upgrades (Lion to Mountain Lion to Mavericks) and attendant iTunes upgrades, it’s still only working maybe 60% of the time, and that for only one of the many devices we have at home. I’m at the point where I may just use an AppleScript to kill and restart iTunes each night, and to explicitly initiate a backup of a different device each day.
Nice bit of data (and a link to a script) that shows roughly how much of this person’s email ended up on a Google server at some point.
For almost 15 years, I have run my own email server which I use for all of my non-work correspondence. I do so to keep autonomy, control, and privacy over my email and so that no big company has copies of all of my personal email.
Since our conversation, I have often wondered just how much of my email Google really has. This weekend, I wrote a small program to go through all the email I have kept in my personal inbox since April 2004 (when GMail was started) to find out.
From eyeballing the graph, the answer seems to be that, although it varies, about a third of the email in my inbox comes from Google!
I’ve run my own email server for many years as well, for some of the same reasons, and it’s a little disappointing to consider that it really doesn’t make a whole lot of difference.
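For the curious, the core of such a check is simple to sketch. Here I count messages whose Received headers mention a google.com host (one plausible heuristic; the linked script may use different logic):

```python
def from_google(received_headers):
    # a message touched Google's infrastructure if any Received
    # header mentions a google.com mail host
    joined = " ".join(received_headers).lower()
    return "google.com" in joined

def fraction_from_google(messages):
    # messages: one list of Received-header strings per email
    hits = sum(1 for m in messages if from_google(m))
    return hits / float(len(messages))
```
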
I just noticed an interesting bug. I got a SPAM email (which I fortunately get far fewer of today because of SpamHero). As I usually do when a SPAM leaks through, I forwarded it to SpamHero so they can use it to improve their filters.
Less than a minute after forwarding the email, I received another copy of virtually the same SPAM. Dutifully, I forwarded it again, but this time I noticed something strange: Though the Mail application identified the email as SPAM (and thus refused to load embedded images), the email as incorporated into the forwarding message window did load the images.
It’s a commonly-repeated security recommendation that one shouldn’t load images by default when reading email, especially for suspicious messages, as the URLs for those images may be used for multiple potentially nefarious purposes. For one, they could use that to verify “Yes, this email address worked!” and then send more SPAM your way. Obviously we don’t want that to happen.
The irony is that the very act of forwarding the message to the filtering service may in fact be hurting, rather than helping. In this case, the URL was exactly the same in both emails, and didn’t appear to be uniquely created to help track which messages were successfully delivered.
Unfortunately, I’m not sure there’s an easy way to prevent this from happening (other than Apple changing the app’s behavior).
Ha. From the “Shoulda seen this one coming” department: Sharing a file with another person (via Dropbox, Box, or any other hosting service) may not be as private as you think. Sure, you may have a completely random URL that nobody else will be able to predict. And, sure, you may rightfully trust the people with whom you share the link not to reveal it to anyone else. But if the file you’ve shared contains a link to a 3rd party site, watch out!
Files shared via links are only accessible to people who have the link. However, shared links to documents can be inadvertently disclosed to unintended recipients in the following scenario:
* A Dropbox user shares a link to a document that contains a hyperlink to a third-party website.
* The user, or an authorized recipient of the link, clicks on a hyperlink in the document.
* At that point, the referrer header discloses the original shared link to the third-party website.
* Someone with access to that header, such as the webmaster of the third-party website, could then access the link to the shared document.
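Concretely, when someone clicks a link inside the shared document, the third-party server’s logs see something like this (hypothetical URLs; the Referer line is the “secret” share link):

```http
GET /whitepaper.html HTTP/1.1
Host: third-party-example.com
Referer: https://www.dropbox.com/s/a1b2c3d4e5f6/private-contract.pdf
```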
This is one of those facepalm moments. Of course this happens. And probably a lot more than we think.
The article (and this related post from Graham Cluley) suggests restricting shared file / folder access to those users listed as “collaborators” for the given sharing service. However, that doesn’t really solve the problem, especially if you need to share the doc with people outside your normal circles.
Better would be a way for the sharing service to request a password from the remote user before showing the file. It wouldn’t be perfect, but it’d definitely help.
I haven’t yet written up this year’s DBIR puzzle, so here’s an article at Dark Reading that neatly summarizes it.
Verizon's earlier contests were mainly cryptography challenges with blocks of cipher that contestants had to decrypt. But the contest has evolved over the years from a crypto focus to more of a mind-bending puzzler. "It's less about someone being an expert in cryptography as it is for someone who is really good at troubleshooting and solving problems... and being really good at puzzles," says Mark Spitler, co-author of the Verizon DBIR and the mastermind behind the cover challenge contest.
One of the reasons I was so excited to get an iPhone a few years back was because of contact management. For years (from 1997 until about 2010) I carried around a Palm Pilot, which had reasonably good tools to synchronize data between the mobile device and my computer. Then I got a “modern” cell phone, which could do text messaging and everything, and setting up data syncing with that was….nearly impossible. But the iPhone, well, most of the time it works just fine.
But then Andrea got an iPhone as well. So now we needed to share contacts and calendar information between both of us. So how do you do that?
This worked out pretty well for a while — we simply used the same iCloud account on both our iPhones (and later, iPad), and now any contact added to one device appeared on all the others instantly. It also synced to our laptop and desktop systems. Yay!
But then Apple introduced FaceTime (2010) and iMessage (2011), which both used your Apple ID as an address. So now we needed to have individual iCloud accounts as well. This is actually pretty easy to set up, though there are a few important limits to keep in mind.
First, only one account can be the “primary” iCloud account on a device. This account will have a few additional features beyond what additional (shared or private) iCloud accounts get. All iCloud storage (key/value data from games, for example, or Keynote documents saved to iCloud, or iCloud backups and iCloud keychains) are associated only with the primary account. Also, Photo Streams, saved Safari bookmarks and Passbook passes will only synchronize among devices with the same primary iCloud account. Finally (and this changed with iOS 7) the Find My iPhone feature only works with the primary iCloud account. So you won’t be able to use a single family account to track the location of every device in the family.
Most of those limits are fairly reasonable (though the Find My iPhone limit is still a bit annoying). So how do you set this up?
On an iOS device, launch the Settings application, and go to iCloud. On OS X, the same settings are found in System Preferences, also under iCloud. This is where you will enter your primary iCloud account information. If you don’t already have an Apple ID, you can configure one here, or it may be easier to create that from the desktop by going to appleid.apple.com. Once the account has been set up, select which features you want to use. I use just about everything except Keychain, Backups, and Mail (most people will probably have a dedicated email account separate from iCloud).
You’ll probably want to enable this same iCloud account for the other “personal” features on iOS (and OS X, where appropriate): iMessage, FaceTime, and Game Center.
Now that you have your personal iCloud account configured, you need to create a shared account for the family. Go back to appleid.apple.com and create a new account, then on iOS go to Settings: Mail, Contacts and Calendars (or System Preferences: Internet Accounts on OS X) and enter that account as a new iCloud account. Select which features you’d like to share (we share Contacts, Calendars, Reminders, and Notes), repeat on all devices which need access, and that’s it!
Well, almost. Because now we’re at the point where all three of our kids are capable of sending messages to one another. And to do that, they need to know everyone’s email addresses, too. No problem, just add them to the shared account!
Well, almost. Except that there’s no way to connect to an iCloud account as a “read only” user. So there’s the (somewhat real) chance that they might accidentally delete one (or more, or most) contacts from the system. This is easily solved with a “kids” iCloud account. It’s just like the main family account, but it’s shared only between the kids’ devices, and if they accidentally delete things, well, that’s not a huge deal. We also stripped out all contacts except immediate family and close relatives (‘cause we also don’t want them texting “POOOOOPY BUTTTT!!!!!” to our friends…)
And, finally, all these iCloud accounts are separate from accounts for the iTunes Music Store (and App store and stores for Videos and iBooks). That’s separate, and currently shared across all our devices for Application and media purchases, though we’re running into some issues there as well (that’ll be another blog post).
So now we have something that looks like this:
A couple of problems remain:
I haven’t found a good way to copy information between iCloud accounts, at least not on iOS. So when we want to put something in the kids’ shared account (like Grandma’s phone number or the schedule of soccer games) we have to manually copy that on an OS X machine.
Be sure that each device has a local entry for that device’s owner. By local, I mean local to the device (not in iCloud), or in the personal iCloud account, but not in the shared iCloud account. This is the contact entry that you’ll set up in Siri as “me.” If you don’t do that, then Siri will get all confused when you say things like “Tell mom I’m heading home.”
Take care when setting up preferences that the “Default account for new entries” is set properly (where “properly” is however you’d like it to be, just make sure you remember what you selected). Because it’d be awkward for something like “Buy new Lego Wii game for kids’ birthday” to show up in the kids’ reminder list, just because Siri thought that the shared account was the right place for new items.
The limitation for Find My iPhone is still kind of frustrating. If one of the kids misplaces their iPod, for example, we have to log into the website using their account, then send the “PING!” message to help us find it. Not a terribly huge problem, but occasionally annoying nonetheless.
Overall, though, this arrangement has worked pretty well for a couple of years now. It’s quite a bit different from how we synchronized contacts between Outlook and a Palm Pilot 1000, and I’m quite glad for the progress we’ve made.
Wow, I haven’t posted here in nearly a year. For a while last year, I was experimenting with a second blog where I could post quick little blurbs and links to interesting current events, but that never really took hold. I think part of it was that it was just too much of a pain to post the little things. Plus, I don’t know, somehow I just got busy or something last summer.
So now it’s time to start again. I’ve changed over the blog system to something simpler than the WordPress system I was using before — Marco Arment’s Second Crack. It’s cleaner, a lot simpler, a whole lot faster, and also lets me easily publish things like slides, papers, or other media. It’s also got a simple applet that I can use to post “link posts,” and I’ll try again to start pulling out interesting looking things for comment.
Unlike WordPress, this site won’t have comments (I never got many anyway), nor does it have a “subscription” option (for the 3 or 4 people who received copies of posts in email). An RSS feed is available, and hopefully I’ll have automatic tweeting of stories working as well.
Soon, hopefully, I’ll be able to put up a writeup of the 2014 DBIR puzzle (in which Alex Pinto and I took first place as a team!), and then maybe I’ll look for other puzzles which I haven’t written up yet, and talk slides which haven’t been posted. I’ll also keep hacking the Second Crack engine (I’ve got a short list of features I’d like to add).
So for the few of you who read the silly things I’ve posted here, thanks for reading, and hopefully soon you’ll have more content to wade through. :)
Sometimes, the names of past Wi-Fi networks your iOS device has used get broadcast to the world as the device tries to find someone to talk to, and this can (possibly) leak information about your favorite home or work networks. So it’s a good idea to delete these networks whenever you’re done using them. Unfortunately, there’s no way to remove Wi-Fi networks from the “Preferred Network List” (PNL) on iOS, unless you happen to be in range of that network at the time. Then, and only then, do you get the option to “Forget this Network.”
So now there’s iStupid, the “indiscreet SSID tool (for the) unknown PNL (on) iOS devices.” It beacons out whatever SSID you want it to, so that your phone will think it’s nearby and let you delete the network from the phone’s database.
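The core idea here is just crafting 802.11 beacon frames advertising the target SSID. As a rough illustration (this is not iStupid’s actual implementation, and the BSSID used is a made-up locally administered address), here’s a minimal Python sketch of what such a beacon frame looks like at the byte level:

```python
import struct

def build_beacon(ssid: str, bssid: bytes = b"\x02\x00\x00\x00\x01\x00") -> bytes:
    """Build a minimal raw 802.11 beacon frame advertising `ssid`.

    This only constructs the frame bytes; actually transmitting them
    requires an injection-capable Wi-Fi adapter in monitor mode.
    """
    broadcast = b"\xff" * 6
    # 24-byte management frame header:
    # Frame Control: type=management (0), subtype=beacon (8) -> first byte 0x80
    header = struct.pack("<HH6s6s6sH",
                         0x0080,      # frame control (little-endian)
                         0,           # duration
                         broadcast,   # addr1: destination (broadcast)
                         bssid,       # addr2: transmitter
                         bssid,       # addr3: BSSID
                         0)           # sequence control
    # Fixed parameters: timestamp (8), beacon interval (2), capabilities (2)
    fixed = struct.pack("<QHH", 0, 100, 0x0001)  # capability: ESS
    # Tagged parameter: SSID information element (element ID 0)
    ssid_bytes = ssid.encode()
    ssid_ie = struct.pack("BB", 0, len(ssid_bytes)) + ssid_bytes
    return header + fixed + ssid_ie

frame = build_beacon("MyOldHomeNetwork")
```

Your device sees frames like this, decides the network is “nearby,” and offers the “Forget this Network” option again.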
A nice little trick… It’d be even cooler if it could detect the aforementioned SSID probes and automatically beacon them back, to essentially “auto-discover” the networks your device knows about.