Dropbox's background update is using excessive CPU on my Mac

I often keep my computer in sleep mode and return to it whenever I need it. Sometimes, resuming it results in a slowdown of other processes. On my retina MacBook with a 2.8 GHz Intel Core i7 and 16GB of RAM, I really shouldn't be running into such CPU numbers (unless I'm using a design app).

Naturally, rather than waiting a minute or two for things to cool down, I hit the power button and force reboot the Mac. After that, I noticed my fans whirring up again. Launching Activity Monitor, I found the culprit: Dropbox. 

201.7% CPU usage? That's nuts.

Look at the values for the Dropbox update: 201.7% CPU, tons of CPU time, 14 threads, and 145 idle wake ups, all for an app update!
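Activity Monitor makes the culprit easy to spot; if you prefer the terminal, the same snapshot can be pulled with `ps` (a quick sketch, portable across macOS and Linux):

```shell
# List every process's CPU percentage and command name,
# then sort numerically (highest first) and keep the top few.
ps -Ao pcpu,comm | sort -rn | head -6
```

A runaway updater like this one should show up at or near the top of that list while it's misbehaving.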

I've used Dropbox for several years and I've never come across this kind of bloated activity. What's happening, and why isn't this being optimized? I understand the desire to do things silently in order to be unobtrusive (learning from the lessons of Adobe Reader, eh?). However, your updater is spinning up my fans and slowing down the rest of my computer. So much for staying out of the way.

Why did Honda only provide a screwdriver for the 2009 Rebel 250 tool compartment?

That's right. A screwdriver. For a motorcycle. For "maintenance."

To answer the question in the headline: cost savings. More specifically, the screwdriver is provided so that the left side cover can be opened to reveal the user manual behind a plastic window.

To access the compartment, stick your key into the finicky plastic keyhole and turn it to the right. Be firm, but don't force the turn; the plastic will feel sticky. Once it's turned, slide the cover left, then pull it down and out.

I find this compartment frustrating: it's difficult to open, made of plastic, gives you very little room to work with, and doesn't offer enough space for the tools you'd need in an emergency. Time to stock up on more tools.

Amazon Echo Look has a good chance of leading the camera-focused and connected future

Okay, let's get it out of the way: Amazon's Echo Look is pretty creepy.

It's not just the camera and recording functionality: the whole idea of an intimate space being used for marketing and targeting under the guise of fashion improvement is pretty unnerving. At my daily brand hotsheet meeting, I ran an informal review of the device with my otherwise all-female team and was met with disturbed glances.

"That's not really for me."
"I love my mirror and switching my clothes in front of it. I don't really need this."
"I think it's creepy. It's always going to be watching me."
"It's really weird."
"It reminds me of that scene from Clueless!" (I had to look that one up on YouTube).

Nods of approval were in full force as my female team agreed: the Echo Look just felt out of place. Why do we need this?

However, I was brimming with excitement at the possibilities.

Not long ago, I bought into the idea of Microsoft Kinect, a camera with 3D sensing functionality that would allow me to peer in and interact with my screen with my entire body.

Of course, the Kinect failed in the market, mainly because of its cumbersome nature. You needed a large, open space. Its lackluster accuracy depended on several rounds of user calibration. Its microphone didn't pick up voices well (try screaming "Cortana" at my Xbox five times in a row). It needed a game controller to augment its functionality. And its price, at $150 not including the Xbox powering the device, doomed it from the start.

Yet here is the Amazon Echo Look, masquerading as a baby Kinect. Except this time, the processing isn't happening on the device; it's in the cloud. Machine learning powers the camera to blur out backgrounds, à la Portrait mode on iOS. AI makes a customer's clothing decisions easier and quicker. It's a well-designed portrait device, built to blend into your room with a white, Apple-like coating.

What else will the Amazon camera enable?

Amazon's Alexa team is clearly using this device as a way to refine its recognition capabilities. But once it breaks out of the mold, developers will get a powerful API to play with. What will they build? I have a few postulations:

  • Pointing at a book my friend brought over to my home: "Alexa, can you add this book to my wishlist?"
  • While in another room, a mother asks her Echo Dot, "Alexa, how is the baby doing?" to check on any motion activity on her child.
  • A family hears the doorbell at dinner time and asks, "Alexa, who's at the front door?" Using AI, Amazon can identify the person at the door and turn them away.
  • Amazon has been analyzing my posture every day. I ask it for my health recommendation: "Alexa, how's my body posture?" Alexa responds with a stretching regimen for tomorrow's exercise session to help rectify the posture. It also sends a note to my doctor about my posture issues.
  • I'm in the kitchen, taking a second look at this dish of leftovers from a few days ago. "Alexa, is this food still safe to eat?"
  • I need to quickly scan a document and send it to a friend but I don't want to whip out the printer. "Alexa, send this document to Mohammad"
  • At work, I can conference in our remote team. "Alexa, set up a live feed from the conference room in San Fran." A window would pop up on our screen with the video chat ready to go.

Amazon took advantage of our cloud-connected world (hell, they pioneered AWS) and reduced the cost of camera-enabled computing. At $199, you have an always-on, ready-to-go, accurate assistant that can use its "eye" to simplify and capture the story of your life.

Why would you get a Kinect? Why would you get a Nest Dropcam when this has more functionality at a similar cost? And if you think Snapchat, Facebook and other tech players aren't watching (no pun intended), you haven't been paying attention.

I love my Echo Dot but I can already see the horizon ahead. Kudos to you Amazon. Your move Apple, Facebook, Snapchat, Microsoft, and Google.

Side note: While the Amazon logo is a ghastly design oversight on the front of the Echo Look, most people won't care. They'll be too busy looking at their own selfies on their phones. Also, how awesome is it that Amazon used the screen we always carry with us as its viewport? No need for an expensive LCD when you already have one in your pocket.

Auto save IDML files when you close an InDesign document

Recently, I ran into a problem with coworkers having trouble editing my InDesign files. I run CC 2017, but they often run older versions, anywhere from CS6 up. To avoid this issue, I often export IDML files.

I like making the computer work for me, so I automated the process. I came across this script from fabiantheblind. It saves an IDML alongside your INDD file every time you save the document.

I took the script and modified it so that it only fires after I close the file. Since I'm trigger-happy with CMD + S, this saves precious disk writes and CPU cycles, not to mention network hangups when working on an AFP/SMB server.

To get this set up, create a Startup Scripts folder in your InDesign Scripts folder. You can also use the Install Scripts tool by Olav to simplify the process.

Download the script and copy it to your Startup Scripts folder. Restart InDesign.
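On macOS, the setup above can be sketched from the terminal. The version folder ("Version 12.0" for CC 2017), the locale folder, and the script filename below are assumptions; confirm your own paths via Window > Utilities > Scripts (right-click the User folder and choose Reveal in Finder):

```shell
# Assumed user-level scripts path for InDesign CC 2017 (Version 12.0, en_US);
# adjust the version and locale folders to match your install.
SCRIPTS="$HOME/Library/Preferences/Adobe InDesign/Version 12.0/en_US/Scripts/Scripts Panel"
mkdir -p "$SCRIPTS/Startup Scripts"

# Copy the downloaded script into place (the filename here is hypothetical).
if [ -f "$HOME/Downloads/save-idml-on-close.jsx" ]; then
  cp "$HOME/Downloads/save-idml-on-close.jsx" "$SCRIPTS/Startup Scripts/"
fi
```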

Now, every time you finish editing an INDD file and you close its window, the script will trigger and create an IDML. While I've tested this on my Mac, it should work on Windows. 

(Sometimes you may run into a "Doc was never saved" error. Double-check your directory to see whether the IDML file was saved regardless; it has worked every time for me so far.)

I've recreated the code below:

// This InDesign Startup Script saves an IDML copy of the doc alongside the INDD
// Save it in your Startup Scripts folder. (See https://forums.adobe.com/thread/588551)
// Thanks to fabiantheblind http://graphicdesign.stackexchange.com/a/71736/67488 for the original script
// Modified by Ashraf (ashrafali.net) for comment & code clarity as well as functionality on close

#targetengine "session"
// Activate a Target Engine to make this work. See https://stackoverflow.com/questions/14061690/what-is-targetengine

app.addEventListener('afterOpen', function(myEvent) {
  // Only run once a document is opened. See https://forums.adobe.com/message/5410190
  if(app.layoutWindows.length == 0) return; // This is here to avoid a run on first start when there are no windows 

  var doc = app.activeDocument; // Get the current doc

  // Register the close handler
  // If you want it to fire on every save instead, change 'afterClose' to 'afterSave'
  app.addEventListener('afterClose', function(theEvent) {
    $.writeln('saving'); // just to see what's going on
    if (!doc.saved) {
      // catch the case where the doc was never saved to disk
      alert('Doc was never saved');
    }
    var aName = doc.name; // get the name
    var newName = aName.replace(".indd", ".idml"); // swap the extension (assumes a lowercase extension)
    // Create a new File object next to the INDD
    var theFile = File(File(doc.filePath).fsName + "/" + newName);
    // export an IDML copy
    doc.exportFile(ExportFormat.INDESIGN_MARKUP, theFile, false);
  });
});