The Present and Future of VR

Recently, I've had a lot of exposure to virtual reality (VR) through various personal and work-related projects. I believe it's poised to become increasingly mainstream as the technology becomes more accessible and as more content becomes available. However, VR hardware is not yet as accessible as mobile devices, and it will be a while before VR applications become as popular as mobile apps. I feel that we are on the verge of mass VR adoption, much like the moment in 2007 right before mass mobile app adoption.


I went to the Franklin Institute in Philadelphia last Tuesday night for their Virtual Reality Showcase. (It looks like they're adding VR to their permanent displays!) Going in, my feeling was that VR is currently applied most easily to 1) gaming, 2) medicine, and 3) architecture. I have spent enough time with the Vive platform at home to know what beautiful worlds can already be explored in games, and how long it takes to truly appreciate them. At the VR Showcase there would be too many people to really get immersed, so instead I wanted to find the unique, out-of-the-box applications being brewed up.

The showcase had a nice variety of experiences like stargazing, exploring a train, and navigating other 3D gaming worlds. Most games on display were not full of plot or action, but rather simple but beautiful experiences to be viewed. At this stage of VR as a technology, the biggest “wow factor” is simply how realistic a world can be. A player can simply put on goggles and sit inside a world for long periods of time and be entertained. Contrast that with the concept of mobile gaming, where instant gratification and throwaway experiences have become the primary driving factor of adoption. Candy Crush works because it takes only seconds to start up and play on a mobile phone. The equipment for experiencing VR has an inherent setup time, and thus the experience must be rich and rewarding beyond a simple instance of gratification.


Surprisingly, for most first-time VR players, the most interesting aspects of an experience are often the most mundane. Just being able to turn your head and see different things is the first new experience a beginner will have. Another is the ability to interact with objects through realistic hand gestures. A display by Leap Motion had very realistic renderings of the player’s hand and finger motions, and allowed manipulation of simple geometric objects. Gaming has a long way to go, but any advance in VR gaming is bound to be highly rewarding for players.


The next big application of VR I was looking for was in medicine and science. I feel that VR technology can potentially be much more beneficial and impactful in this field. The uses at the showcase were primarily for training and for visualizing medical procedures: several applications visualized brain structures and helped medical practitioners prepare for upcoming procedures.


There were also several rehab and physical therapy oriented applications, including a company that created both fitness tracking technology and augmented reality accessories. However, most of the displays at the showcase were pre-recorded 360-degree video clips of the tools in action. Although the user can turn their head and see different parts of the video, these were not truly VR experiences. Perhaps the learning curve of using an actual VR medical tool would be too much for a 30 second demo. I think that using VR as a training tool is a great place to start, but there are a ton of uninvented and unvisited possibilities, and the showcase didn’t exhibit any surprising applications.


Unfortunately, there weren’t any architectural applications of VR at the showcase. I had built an architectural experience in the past for the Samsung Gear VR, and I was hoping to get a more immersive one like the Vive’s IKEA experience. While there are companies that provide experiences to view interiors or traverse rendered buildings, I believe the gold standard will be interactive architectural tools. Through the use of VR, designers and architects could revise layouts, change finishes, and reconfigure positions of whole floors. This is something I’d love to get into more.

Aside from gaming, medicine and architecture, I did discover an interesting new field for VR. Because of its immersive and highly engaging nature, VR experiences are great platforms for storytelling. While creating a whole 3D world is difficult, it is relatively easy to create a 360-degree video using a set of cameras arranged in a sphere. This allows storytellers to create an immersive experience where the user can turn their head and choose what part of the story to view. One company, nothingbutnets, created an 8 minute experience that deeply affected some of the viewers at the showcase. While VR was only a small aspect of their larger philanthropic and socially conscious mission, I believe that it could be one of the most impactful.


360-degree video is probably more likely to get mass adoption than fully interactive applications. Compared with designing a game, it's much easier to film a video and distribute it via something cheap like the Google Cardboard. Such films can easily be turned into experiences that allow people to travel to a remote part of nature, see an event live, or experience the troubles of a third-world country. There are experiences available on the Vive that allow you to explore an Icelandic volcano or a remote glacier. Those nature scenes can really resonate with someone who hasn't been able to experience them in person. Viewing them in VR made me really want to preserve the real thing for future generations. A VR experience may be a more effective way to get a message across for social projects, and to really engage an audience that is jaded from TV commercials and magazine ads.

Finally, there are a few experiences and applications that I hope will come into existence as VR technology advances. The combination of VR and AR could make things much more interesting than either is able to individually. Mapping a virtual world onto the real one could have many applications. Imagine the Microsoft HoloLens being used for navigation, pointing to the actual road you need to take at a confusing intersection. Or a virtual dressing room that lets you try on clothes that fit your real body type. In medicine, live 3D visualization of a surgery, along with tools that provide haptic feedback, could make remote surgery or microsurgery a much easier process. Finally, for gaming, I think that if an omnidirectional treadmill can be implemented, the combination of that and a wireless headset could keep people immersed for hours inside a fantasy land of their choosing.

Right now VR has a huge range of possibilities and a lot of technological limitations, but I believe that I need to be involved in VR in order to use it to help the world.

Thoughts from a meetup

Today I went to an iOS programming meetup (Philly Cocoaheads, in case anyone was wondering) to hear a colleague give a talk and also to just get back into the technical community in Philadelphia. It was kind of what you'd expect: a lot of techy, curious, somewhat entrepreneurial, half-expert and half-student coders. Everyone knew a lot, and everyone had a lot to learn. But it was a comfortably geeky atmosphere where people could nod at the familiar and gawk at anything new that was presented. Then, at the end of the whole meetup, we had ended a little early and weren't ready to go out to a bar yet, so people were about to shoot the shit for another half hour, when a relatively out-of-place newcomer stood up and announced that he wanted to talk.

There was definitely an air of hesitation, because this was clearly unscripted. The organizers were not going to stop the guy from talking, but they were also quietly anxious about who he might be.

Turns out, he was a veteran who wanted to get into the app business, and he had a lot of grand ideas for building a great product to fund a charity for veterans. Definitely not an idea to be dismissed, but also definitely not an instant hit. The crowd was polite yet uncomfortable as he talked; the pitch was directed toward an audience that wasn't quite the tech crowd in the room. He started talking about building the perfect app to fund his second venture: having a lot of people who were already ready to help out for free in every way except development, not needing an entrepreneur because he didn't want to deal with business guys, building a good product and great marketing for it, and knowing people who could post about it to a million followers. Instantly, the room started frothing and, in a very polite but knowing tone (which reminded me very much of the /r/entrepreneur subreddit), started telling the guy what he needed to do, what he was doing wrong, and how he should test his business instead. I too had an immediate reaction that this guy had it all wrong, and that I wouldn't be able to help him because he became defensive; he wasn't going to listen to the people who knew it all. So I left.

But now I keep thinking about a few things that I really wanted to say. Maybe it would have been just as fluffy as what other people were saying. We were all repeating the mantras you hear in online startup communities. Did any of us really know what they meant? Could anyone speak from experience? Is it more comfortable to talk within this circle of accepted knowledge than to be like this guy, jumping into an unknown discipline in front of an unknown audience of possible hostiles? Maybe his training steeled him for the type of reception he'd get. But he had the balls to come take the feedback, and here's mine for him, which I wish I'd been able to give in person.

  1. Guy: "I'm not a developer like you, and I don't intend to be. I'm looking for developers." You and everyone else, buddy. Fortunately, you came to the right place. Unfortunately, you talked too long. You didn't do an elevator pitch, and you cut into our socializing time and potential beer time. If you'd given your pitch and left your business card, at least three people would have come up to you afterwards, including me. We developers don't like to sound agreeable in public. We also don't like to be kept from our beer. But it's not that we don't want to help; we just don't like to be helpful publicly.
  2. Guy: “I’m not looking to make $1000 once. I’m looking to make $1 1000 times.” This is actually a great thought and I don’t think anyone disagrees with you. But $1 1000 times is a lofty goal and you don’t seem to realize that. And you probably won’t be able to do it. But even if you do, it won’t be enough. Plan to lose money, but gain traction. And if not traction, gain experience so your next one makes money, one dollar at a time.
  3. Guy: "I don't want to go to an entrepreneur meetup. I don't need a business guy." Unless you're a good business guy yourself, which unfortunately he wasn't, you really do need an entrepreneur's help. And that doesn't necessarily mean someone non-technical. He needed someone who was technical and/or business savvy, because that person would have given a damn about his project. Talking to this crowd was definitely the right way to meet technical talent, but I think he'd benefit way more from talking to someone who's run a business before and knows how to work well with tech guys. He needed savvy, and all he got was sass. (And not SaaS.)
  4. Audience: "Have you thought about other marketable products? Tangible products like t-shirts?" I actually partially agree with this. A business focused on veteran affairs doesn't have to be a mobile game that hits it big in order to become a charity; hitting it big is a dream that most tech entrepreneurs, even the ones not aiming for charity, will never realize. Other ideas I had at that moment were a veteran-oriented school for coding, a coding bootcamp. Instead of making a charity for vets, make jobs for vets. But even something like sports apparel for veterans would be cool. And maybe much more marketable.
  5. Guy: "I've played every tower defense game on the App Store and Google Play." Yes, but there are a ton of other games that you'd be competing with. The market is saturated with games. The market is also saturated with apps. But then the audience members started describing how different mobile games, mobile apps, and other tech businesses were from each other. That's when both the guy and I stopped paying attention. It doesn't help to describe the esoteric details of why mobile gaming and mobile app businesses are different. Tell this guy what he needs to hear: he's thinking too broadly and too narrowly at the same time. Find the small tech niche and kill it.
  6. Audience: “Do research first and figure out your market fit.” Although I argued for this somewhat in the previous bullet point, I think the timing for this advice is wrong. This guy is no longer in the research phase. He’s not your entrepreneur who doesn’t have any ideas. Actually, I think that the guy would benefit from jumping into the market. His research will be to actually do things and learn as he goes, and learn as he fails. But the audience did have one good point: a tower defense game is probably not going to be the right thing to start with.
  7. Guy: “I know people who are already willing to help out for free. I have a lawyer” – there was a collective rolling of eyes here – “a musician, artists. I have someone who will help build a webpage.” You don’t really have the right team. Really, you don’t have a real team. Listing the lawyer first is often a red flag. That means you’re thinking about the institution, not the technology. Also, the musician you have isn’t going to make the rest of the game work. Nor is the artist. But for a game, yes you need those guys, but later down the road. And I have doubts about the webpage. So let’s just be honest with ourselves and say to this crowd “I need a developer and I don’t have one. I can’t pay much for one. We’re ready to try out the idea and we’re ready to fail. But what we end up doing will be worth more than what we started with because we did it.”

Here's my final bit of advice. Instead of pitching your app like you know everything about it and are ready to make money, pitch it like it really is: you don't know anything about it, and you're reaching out to learn from this community. We will teach you, and learn with you, and we'll all fail together. But failure is a better alternative than not starting. And success is just a matter of trying until you make it.

Facebook Messenger: RenderBot

Facebook announced at their conference, f8, the release of bots for Facebook Messenger. This is a pretty cool, although not groundbreaking, new feature. Back in the early 2000s I built an AIM messenger bot with my classmates to respond to queries about where the campus shuttle was, by scraping and parsing an existing shuttle tracking website. That would have been pretty revolutionary if we'd had 900 million users on the platform, and I think that's the major advantage of Messenger bots.

Messenger bots provide a few solutions to a few problems in the current tech environment.

1) App fatigue – people don’t want to download another app for your business, so if you can do all your transactions through bots, it would be seamless. My old client, Fwd Vu, built something like this for their merchant and transaction interface. I’d gladly offer my services to work on a bot for them.

2) Automated customer service – this makes it possible to go a bit beyond your usual FAQ and to pose questions to a service that might be able to hold a conversation with you, without an actual person behind it. There is talk that AI and machine learning can be used to improve the bots' behavior. Let's hope it doesn't end up being like Microsoft's Tay.

3) Existing platform – the Facebook Messenger platform is already very widespread. If people start accepting its use as a business tool, it would be possible to reach a huge audience.

For RenderApps, it seemed a natural way to increase our ability to capture new customers and client interest. It was also a great way to practice deploying more services onto Heroku and to learn a bit more Node.js. Here is a step-by-step guide on how we deployed a Facebook Messenger Bot.

RenderBot is built on Express/node.js, hosted on Heroku, and linked to Facebook's app and Messenger platforms. I followed two basic guides that were published recently: the official Facebook guide, and one from hashbang. This guide assumes that you have a Heroku account with the Heroku Toolbelt installed, and that you have a Facebook developer account.

  1. Make a clone or fork of a basic Facebook bot repo. I used this repo which is basically the final version of the Facebook guide.
  2. Create a new heroku server instance:

    heroku create render-bot

  3. Create a Facebook Page. For the purposes of this tutorial, I've created a new Pets page.
  4. Create a Facebook App, or use an existing one. To create a new one, go to the Facebook Developer Dashboard and create a new web app.
  5. Enter your contact email, and select Apps for Pages as the category. After creating the app, you can skip the rest of the setup and go directly to the app dashboard.
  6. Under the app settings, go to Messenger, and generate a new token. Select the Facebook Page you created, and you'll be prompted to log in to the app. After accepting, there will be a page token. Click to copy that token to the clipboard.
  7. Open index.js from the cloned facebook-messenger-bot repository. You’ll see two token variables that should be replaced by your generated tokens.

    var verify_token = 'superdupersecrettokenyay';

    var token = "43dff234lKedYBvMKfiKbonmbwaYUuGmBtmKtoJ8b3YxgPKOJpeNNLLIWxpJAqYSyorgQFQclU59IkYBXzaFASIFYOUREADTHISYOUAREAWESOMEOh4xh36szaxgysl2u7gP1xNexpLiLkOhiE2wZZdqlm9GgyouQAwv2ZPoUoBnJMrpqlqoMdvgoPMbfImbNkxGsISPhbsTzs4ps3d14";

  8. Copy the Page Access Token generated by Facebook to the second token. This is an access token used to make calls to Facebook’s API through your node.js app.
  9. Generate your own random token using random.org, or just put in a passphrase of your choosing. This is an access token used by Facebook to connect to your node.js app; it ensures that another Facebook app can't connect to your webhook. (I used random.org to generate a bunch of alphanumeric strings and put them together.) Paste this token into "verify_token" in index.js. Go ahead and commit the changes and deploy to Heroku. Then, open your web server to get its URL.

    git commit -m "added tokens"

    git push heroku master

    heroku open

  10. In the Facebook App settings page, click on Setup Webhooks and type in the URL of your deployed Heroku server with /webhook at the end; also put in the generated string you created. Select all the message option checkboxes, and click Verify and Save. If your server has been deployed with the correct keys, there shouldn't be any errors; otherwise, a red exclamation mark may appear in the Callback URL box.
  11. A few issues I ran into while trying to configure this for the first time with no guidance:
  12. The Callback URL must have /webhook at the end.
  13. The URL must be preceded by https://
  14. You must enter the correct validation code (the same string you entered in index.js as "verify_token").
  15. In index.js, remove the lines

    app.listen(1337, function () {
      console.log('Facebook Messenger echoing bot started on port 1337!');
    });

  16. and add the lines

    app.set('port', (process.env.PORT || 5000));

    app.listen(app.get('port'), function() {
      console.log('Node app is running on port', app.get('port'));
    });

  17. This makes your node.js server listen on the port Heroku assigns through the PORT environment variable (falling back to 5000 locally), rather than on a hardcoded local port.
  18. Finally, take the Page Access Token generated for your Facebook app, and execute this command in the terminal, with your token substituted:

    curl -X POST "https://graph.facebook.com/v2.6/me/subscribed_apps?access_token=<token>"

  19. At this point, your messenger bot should be live. If you go to your app's page (facebook.com/yourapp) and send the page a message via Messenger, you'll see your message echoed back, and additional activity in your Heroku logs. I've added a few more bits of functionality in order to do a little more. First, I added a more generic parser function for the user's message. Replace the webhook POST handler with this code:
    app.post('/webhook/', function (req, res) {
      var messaging_events = req.body.entry[0].messaging;

      for (var i = 0; i < messaging_events.length; i++) {
        var event = req.body.entry[0].messaging[i];
        var sender = event.sender.id;

        if (event.message && event.message.text) {
          var text = event.message.text;
          parseMessage(sender, text);
        }
      }

      res.sendStatus(200);
    });

  20. Next, check for the existence of a particular string in the user's message. The sendTextMessage helper comes from the cloned repo (a sketch of it appears after this list). Let's respond to any message containing the word "hi":
    function parseMessage(sender, text) {
      console.log('render-bot received message: ' + text);
      console.log('sender: ' + sender);

      // respond whenever the incoming text contains "hi"
      if (text.indexOf('hi') != -1) {
        sendTextMessage(sender, 'Hi and welcome to FatCatBot!');
      }
    }
  21. Let’s get a bit more personal. Get the user’s name and respond with it. Add this function:
    // assumes the 'request' module is required near the top of index.js,
    // e.g. var request = require('request');
    function getUsername(sender, callback) {
      var url = 'https://graph.facebook.com/v2.6/' + sender + '?fields=first_name';
      request({
        url: url,
        qs: {access_token: token},
        method: 'GET',
      }, function (error, response) {
        if (error) {
          console.log('Error sending message: ', error);
          callback.error(error);
        } else if (response.body.error) {
          console.log('Error: ', response.body.error);
          callback.error(response.body.error);
        } else {
          console.log('get user profile response: ' + response.body);
          var parsed = JSON.parse(response.body);
          callback.success(parsed);
        }
      });
    }
  22. Then update your parseMessage:
    function parseMessage(sender, text) {
      console.log('render-bot received message: ' + text);
      console.log('sender: ' + sender);

      if (text.indexOf('hi') != -1) {
        getUsername(sender, {
          success: function(results) {
            var name = results.first_name;
            sendTextMessage(sender, 'Hi ' + name + ' and welcome to FatCatBot!');
          },
          error: function(error) {
            sendTextMessage(sender, 'Hi and welcome to FatCatBot!');
          }
        });
      }
    }
  23. This is a screenshot of the final product deployed to one of my live apps/pages, RenderApps. Feel free to come say hi to RenderBot.
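For completeness, the snippets above rely on two pieces that come from the cloned facebook-messenger-bot repo rather than from this post: the GET /webhook verification route (which uses verify_token) and the sendTextMessage helper (which uses the page access token). Roughly, they look like the sketch below; your clone may differ slightly.

    // Verification endpoint: Facebook calls this once when you click Verify and Save.
    // It echoes back hub.challenge only if hub.verify_token matches your verify_token.
    app.get('/webhook/', function (req, res) {
      if (req.query['hub.verify_token'] === verify_token) {
        res.send(req.query['hub.challenge']);
      } else {
        res.send('Error, wrong validation token');
      }
    });

    // Send a plain text reply to the user through the Send API,
    // authenticated with the page access token stored in `token`.
    function sendTextMessage(sender, messageText) {
      request({
        url: 'https://graph.facebook.com/v2.6/me/messages',
        qs: {access_token: token},
        method: 'POST',
        json: {
          recipient: {id: sender},
          message: {text: messageText}
        }
      }, function (error, response) {
        if (error) {
          console.log('Error sending message: ', error);
        } else if (response.body.error) {
          console.log('Error: ', response.body.error);
        }
      });
    }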

Edit: Ugh, this wordpress site is not great for code formatting. Sorry about how shitty it looks…I’ve kept the text intact instead of in a screenshot so you can copy/paste.

Hosted files with Parse Server

This is just a brief guide to how I handled photos on one of my apps after its backend was migrated over to Parse Server.

Heart FX is a photo app that allows users to add stickers then share the photos on social media. I used the Parse filesystem to store the images because it was pretty easily integrated into the PFObject framework, but when the backend switched over to MongoDB, I saw a few issues that I still don’t quite understand. First, images started to be stored as binary files when viewed through the old Parse dashboard. Second, the previous ACL permissions were no longer being set.

Screen Shot 2016-04-08 at 1.48.29 PM.png

This is my current view of the dashboard through Parse. The images marked as "file" were clickable and would allow you to download and view an image. The ".bin" files led to an invalid access error, which looked very similar to an AWS access error.

Screen Shot 2016-04-08 at 1.49.38 PM.png

According to the Parse documentation, the default file host is GridStore, which is already managed by the MongoDB instance. I could see that there are now additional collections called fs.files and fs.chunks in my MongoDB database, which contain some large files. This must be how Parse Server stores files uploaded as a PFFile as part of any of your PFObjects.

And while all the data seems to be there, and can probably be retrieved using the regular Parse PFFile model, the MongoDB dashboard doesn't really give us a good, quick way to look through the files to confirm that photos were uploaded correctly. Digging into the fs.files and fs.chunks collections, I identified one particular file and the data that it contained. At the time of this blog post, I don't know how to manually download or view it, or to change its permissions, if any.

So I decided to go back to Amazon AWS, which is a solution I’m familiar with for hosting images. The Parse Platform Github repo has a pretty good guide for setting up AWS. And Amazon does have a free tier for a year so it’s fine to test out the service; if I ever exceed the free tier’s limits, it would mean my users are making me money anyways. (I wish.)
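For reference, pointing Parse Server at S3 mostly comes down to passing a files adapter into the ParseServer configuration in index.js. The sketch below follows the shape of that guide; the bucket name and environment variable names are placeholders, and depending on your parse-server version the adapter either ships with parse-server itself or comes from the separate parse-server-s3-adapter package.

    // index.js (parse-server-example) -- sketch of switching file storage to S3
    var S3Adapter = require('parse-server').S3Adapter;

    var api = new ParseServer({
      databaseURI: process.env.DATABASE_URI,
      appId: process.env.APP_ID,
      masterKey: process.env.MASTER_KEY,
      serverURL: process.env.SERVER_URL,
      filesAdapter: new S3Adapter(
        process.env.AWS_ACCESS_KEY_ID,      // IAM access key for the bucket
        process.env.AWS_SECRET_ACCESS_KEY,  // IAM secret key
        'heart-fx-photos',                  // placeholder bucket name
        {directAccess: true}                // serve file URLs directly from S3
      )
    });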

Screen Shot 2016-04-08 at 2.08.41 PM.png

After creating the S3 bucket and successfully saving a PFFile to a new PFObject, I now see the image appear in the S3 dashboard. Best of all, I can click on an actual link and see the file correctly uploaded to the web as a JPEG. Yep, my app is a Valentine's Day app, and this is the thumbnail I saved while testing. I no longer save the full-size image, in order to save theoretical space on AWS if I ever have to pay for it.

9b1caa2b665896ae35a70d75f0f818e6_file

On the Parse dashboard, I still don't see permissions. That's because with Parse Server, the access control list (ACL) for a PFObject can no longer be set by the client app, which is what my app was doing. However, I wasn't able to successfully set its ACL in the same way as before through Cloud Code either.

Screen Shot 2016-04-08 at 2.12.19 PM.png

Since the ACL is set by AWS for each file uploaded, this file is still accessible publicly; if I ever needed to do anything with the files, they are accessible through the new access control, so I’m not going to worry about the ACL through Parse for now. For any future app that may actually require ACLs to be set in our own database, I’ll investigate Cloud Code for Parse Server further.
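If I do revisit this, the general shape of a Cloud Code hook that stamps an ACL onto each new object looks something like the sketch below (a beforeSave trigger in the server's cloud/main.js). I haven't verified this against my own setup yet, and the class name here is a placeholder.

    // cloud/main.js -- hypothetical beforeSave hook that sets an ACL server-side.
    // 'Photo' is a placeholder class name; use whatever class holds your PFFiles.
    Parse.Cloud.beforeSave('Photo', function(request, response) {
      var acl = new Parse.ACL(request.user);  // owner gets read/write access
      acl.setPublicReadAccess(true);          // everyone else can read
      request.object.setACL(acl);
      response.success();
    });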

Migration from Parse to Heroku

Since Parse is shutting down, I’ve been looking at alternative mBaaS (mobile backend as a service) options. Parse, purchased by Facebook a few years ago, basically provided the whole backend stack and SDK for mobile development. Once you set up an app, Parse would manage the database and server, and even provided a mobile SDK so that you could code natively (in Objective-C or Swift) without having to worry about any of the rest of the stack. For front end only app developers, Parse made it easy to add database support without having to do any server side code. It was also free – I hosted tens of apps, most of them in development, and never had to worry about cost, even with photo sharing apps.

Parse also provided a CloudCode service which gave me some of the flexibility of having my own server. That was one of the big reasons I stuck with Parse instead of using another service like Kumulos, Kinvey, or Firebase. The ability to add validation, but also to create web endpoints and have side effects, made Parse more than just an mBaaS. It allowed me to put a toe into the scary world of full stack development, but I still didn't have to worry about dynos or server deployment.

Well, enough praise for Parse because it’s going. I looked into some other services, primarily Kumulos and Firebase. Firebase is really cool for its socket-based implementation and has a fairly decent mobile SDK. That means you can treat the interface with Firebase as an object rather than using a RESTful API. The problem is that I couldn’t port over my existing apps from Parse, and Firebase doesn’t provide any sort of Cloud Code. Firebase was also much closer to a raw NoSQL database than I was used to – the dashboard representation exposed the raw key-value pairs instead of having a table for each class. But that’s something that I could switch to if needed.

I also looked at Kumulos, which provided a more similar interface to Parse. At the time of writing this article, I was unaware that Kumulos had a KScript feature similar to Parse Cloud Code. But to be honest, the design of the Kumulos website gave me second thoughts about using their service. I’m going to leave the door open for Kumulos, but I decided that it was a good opportunity to dive into Parse Server, the open sourced version of Parse, and to finally understand what sits behind a mobile web server. It’s time to sign up for Heroku and MongoDB and deploy your own server.

Here's what it takes to port a database over from Parse to your own Heroku stack. This wouldn't have been possible without LearnAppMaking's tutorial, because every other tutorial or guide, including Parse's and Heroku's own, assumed some knowledge that I didn't have. So here I'll reproduce it with one of my own apps and assume that you, like me, are a mobile-only developer.

  1. Sign up with mLab. They will provide the database hosting, and they have a free tier. Create a new MongoDB deployment. MongoDB is the database type, and mLab provides it as a cloud service; that means somewhere on the internet there are disks (hosted by Amazon, most likely) that store your information. It's free until you go into production mode. Select AWS as the cloud provider, and use a single-node sandbox. After the new deployment is done, click on it and you'll see a warning that a database user is needed. Create a database user. This is an admin user that's allowed to access the database. You'll need this for Parse to migrate your data and start using your new database instead of the Parse-hosted database.
  2. Migrate your Parse database. There's an option in the new Parse Dashboard, under your app's App Settings, to Migrate to External Database. Click on the button, and enter the MongoDB database URI. The URI can be found in your mLab deployment settings. Substitute the new username/password created for the database, like: mongodb://user1:pass123@dsxxx.mlab.com:xxx/neroh-camera-framework
  3. After the migration is done and finalized, you'll see the migration status in Parse, along with the associated URI of your database. At mLab, you'll be able to see your imported data; it should mirror the schema that was in Parse's database. Now your Parse database is hosted on your own MongoDB server. However, the migration step done in Parse also allows Parse to connect to your MongoDB database. So your apps are still talking to the Parse-hosted server, and Parse is talking to your database; any apps out in the wild will still function without blinking an eye. If you want to take a breather, this is a good place to do it, because your app is still operational and Parse will no longer deprioritize your traffic after April 28. However, you'll still need to eventually move off of the Parse-hosted server onto the open-source Parse Server, hosted on your own Heroku instance.
  4. First, clone Parse Server from GitHub. I would actually fork it to your own GitHub or Bitbucket account. If you are going to have multiple apps, each one will need a server, so I would create a separate repo for each one. For now, clone it.
  5. Sign up for Heroku, then download the Heroku Toolbelt. The Heroku integration will be done on the command line:
    1. Log in to heroku:

      heroku login

    2. Inside the cloned directory for Parse Server (parse-server-example), create a new heroku instance. This is the same as creating a heroku app on the heroku dashboard, then adding a git remote.

      heroku create <new-server-name>

    3. Deploy your server to heroku.

      git push heroku master

    4. Connect your MongoDB database to your Heroku server.

      heroku config:set DATABASE_URI=mongodb://<dbuser>:<dbpassword>@dsxxx.mlab.com:xxx/neroh-camera-framework

  6. On the Heroku dashboard, you should see the Database URI correctly configured under your Apps -> Settings -> Config Variables. Start the Heroku server:

    heroku ps:scale web=1

    Then, go to the website to see that the server is actually running. The easy way is to type in the command line:

    heroku open

    You’ll see the basic website created by the parse-server-example repo that prints “Make sure to star the parse-server repo on GitHub!” on the screen. This means your Parse server is accessible.

  7. Any existing apps that used your old Parse-hosted server still point to that server. Because Parse has migrated your data over to your MongoDB deployment, any old apps will still work, and any new data will correctly get saved to your new database. Test this by creating some sort of database object (a new user, for example). On the Parse dashboard, you'll see this new object; on the MongoDB dashboard, you should see the new user as well. In mLab, I can perform a search for the same objectId (the field name is _id).
  8. The last step is to update the mobile app to point to the new Parse Server hosted on Heroku. In the application:didFinishLaunchingWithOptions: method, replace the old Parse configuration line:

    [Parse setApplicationId:PARSE_APP_ID clientKey:PARSE_CLIENT_KEY];

    with the new configuration to connect to your Parse Server:

    ParseClientConfiguration *config = [ParseClientConfiguration configurationWithBlock:^(id<ParseMutableClientConfiguration>  _Nonnull configuration) {

            configuration.clientKey = PARSE_CLIENT_KEY;

            configuration.applicationId = PARSE_APP_ID;

            configuration.server = PARSE_SERVER_URL;

        }];

    [Parse initializeWithConfiguration:config];

  9. PARSE_CLIENT_KEY and PARSE_APP_ID are the client key and app ID found under your Parse dashboard -> App Settings -> Security & Keys. PARSE_SERVER_URL is your Heroku app's URL with /parse appended; this is the URL that gets opened by typing heroku open in the console, so be sure to add /parse to the end of it.
  10. You'll also now have to update Parse Server's configuration in index.js to talk to your MongoDB account. In the parse-server-example directory, edit index.js and add values for appId, masterKey, serverURL, and clientKey; databaseURI has already been set by the Heroku environment. masterKey can be added here because it is deployed to your server on Heroku and is secure, but if you save the codebase to GitHub or Bitbucket, make sure the repo is private. (A sketch of this configuration appears after this list.) After adding the correct values, deploy your server:

    git push heroku master

  11. Check that your deploy worked by simply running the app. Any data changes should show up in the database on mLab. Your Parse dashboard will still sync to it, so any existing users on old versions of the app will still be able to see new content. I'm not sure how long this lasts…perhaps until Parse permanently shuts down. But this flexibility is really good, because any of your old users who haven't updated their apps will see that nothing has changed and that your awesome app is still running just as it always has. But now we know we're free from Parse's shutdown. A caveat: there are some incompatibilities between the functionality you may have had on hosted Parse and Parse Server: https://github.com/ParsePlatform/parse-server/wiki/Compatibility-with-Hosted-Parse. Most are still being addressed, and the document gets updated pretty often.
  12. Once your app has been tested to correctly update data to your Heroku-hosted server, you can submit a new version to the app store. From here on out, you’ll be hosting your own data on mLab and Heroku.
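For reference, here is a minimal sketch of what the index.js configuration from step 10 ends up looking like, following the layout of the parse-server-example repo. The values shown are placeholders; in practice they come from your Parse dashboard and the Heroku config variables set earlier.

    // index.js (parse-server-example) -- minimal sketch of the Parse Server setup
    var express = require('express');
    var ParseServer = require('parse-server').ParseServer;

    var api = new ParseServer({
      databaseURI: process.env.DATABASE_URI,                   // set via heroku config:set earlier
      cloud: __dirname + '/cloud/main.js',                     // Cloud Code entry point
      appId: process.env.APP_ID || 'PARSE_APP_ID',             // from the Parse dashboard
      clientKey: process.env.CLIENT_KEY || 'PARSE_CLIENT_KEY',
      masterKey: process.env.MASTER_KEY || 'PARSE_MASTER_KEY', // keep this repo private
      serverURL: process.env.SERVER_URL || 'https://<new-server-name>.herokuapp.com/parse'
    });

    var app = express();
    app.use('/parse', api);  // mount the Parse API at the /parse path

    var port = process.env.PORT || 1337;
    app.listen(port, function() {
      console.log('parse-server-example running on port ' + port);
    });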

Tutorial: Profile scroll view

Almost every app with a user profile has a screen that lets the user edit multiple fields, typically a profile editing view. One problem that often comes up is that while a set of user information fields might show up decently on a large phone, developers often forget that on a smaller phone like the iPhone 4S the fields will run off screen or get blocked by the keyboard. Here's a quick and correct way to use autolayout to create a scrollable profile editor.

Using autolayout makes it so that you don’t have to worry about new screen sizes that Apple releases in the future. It also allows the iPhone 4S screens to display all the information and makes sure that on an iPhone 6S or an iPad, the screen doesn’t scroll unnecessarily.

The challenge of using a scrollview with autolayout is making sure that the scrollview's size and its content size are automatically calculated. The bounds of the scrollview itself are different from its content size, and often we are left with those annoying yellow or red warnings that the layout is missing constraints or has invalid constraints. In this tutorial, we'll make sure the scrollview is correctly sized by both its container and its content.

The full project source can be found at the RenderApps tutorial repo: https://bitbucket.org/renderapps/profilescrollview

1. Create a new single view project. In the main storyboard, let’s make the main ViewController a freeform controller until we can finish designing the actual content of the profile view.

profile1.png

The scrollview should be constrained to the sides of the superview. Add constraints for the top, left, bottom, and right offsets. For the content, it's best to create a view inside the scrollview that essentially acts as a content container. This UIView should also have constraints to the left, right, top and bottom of its superview (the scrollview), but at this point those constraints are not enough, and you'll see some complaints from Interface Builder.

profile1.5.png

Let's add some content. First, add a logo in the top center, and add some basic constraints to that. The next line should contain a UILabel and a UITextField. Assuming that these are your basic content fields that will be repeated, go ahead and constrain those to each other.

If we go ahead and add vertical center alignment between the two controls, any copies made later will have that constraint.

For simplicity, I've copied the same fields. Constraints can be added to these controls en masse. Make sure they are well constrained: all the labels should be constrained to the left, all the textfields should be constrained to the right, and all the textfields should have a top constraint to the previous textfield. Since the original textfield and label had a vertical center constraint, these should be enough. For a more customized profile view, there may be UITextFields, controls, toggles, imageviews, buttons, etc. If the internal elements are all well constrained, the red warnings for these controls should disappear.

profile4.png

Also, since we’ve completely filled out our content, let’s make sure the scrollview’s content view has the right bounds. Make the content UIView, the scrollview, and the controller’s main view just small enough to contain the last set of fields.

profile5.png

In this example, the main screen has gone from a height of 800 (our empty canvas) to a height of 625. The last set of fields should also have a bottom constraint to the bottom of the content view.

At this time, the scrollview is still not fully specified and you still get a few complaints. However, they are not missing constraints now, but just misplaced constraints.

profile7.png

This is because Interface builder still does not know exactly what size the content view should be. So all the items inside the content view are fully constrained, but the content view has no idea how large it should be. If you run the app, you’ll see what’s wrong:

profile8.png

For a profile scrollview, we want the height to be flexible, but we want the width to conform to a screen. This is easy since we have a content view. Just add a constraint for equal widths between the content view and the controller’s view.

profile9.png

Now the width of the text fields is correctly constrained by the width of the device.

profile10.png

This is all we need to do in autolayout in order to make sure all the constraints work for iPhone 4S and iPhone 6. What happens when you start typing in a field? The keyboard will still pop up, and the bottom most fields will get covered. We want to be able to scroll the fields into visibility. Because of the autolayout, we can be sure that the content view is correctly sized; its width is constrained by the main view, and its height is constrained by the fields. So we can simply mess with the size of the scrollview. Add a reference to its bottom layout constraint:

    @IBOutlet weak var constraintBottomOffset: NSLayoutConstraint!

    @IBOutlet weak var scrollView: UIScrollView!

And connect the bottom constraint of the scrollview and its superview to this layout constraint.

profile11.png

In viewDidLoad, add some keyboard notification listeners:

    NSNotificationCenter.defaultCenter().addObserver(self, selector: "keyboardDidShow:", name: UIKeyboardDidShowNotification, object: nil)

    NSNotificationCenter.defaultCenter().addObserver(self, selector: "keyboardWillHide:", name: UIKeyboardWillHideNotification, object: nil)

These notifications should alter the bottom offset of the scrollview.

    // MARK: - keyboard notifications

    func keyboardDidShow(n: NSNotification) {
        let size = n.userInfo![UIKeyboardFrameBeginUserInfoKey]?.CGRectValue.size

        // `diff` is assumed to be defined elsewhere in the view controller
        // (see the full project source linked above)
        self.constraintBottomOffset.constant = 23 + size!.height - diff

        // force scrollview constraint to be updated before next step
        self.scrollView.superview?.setNeedsUpdateConstraints()
        self.scrollView.superview?.layoutIfNeeded()
    }

    func keyboardWillHide(n: NSNotification) {
        self.constraintBottomOffset.constant = 23 // by default, from IBOutlet settings
        self.view.layoutIfNeeded()
    }

Now, when a textfield near the bottom is clicked, the scrollview automatically resizes so that the bottom of the scrollview is above the keyboard so all the fields are visible. It also automatically makes the text field currently being edited visible by scrolling to it.

profile12.png

So with this final addition, the profile editor keeps every field reachable even when the user taps a field near the bottom of the screen. On larger screens, that problem might not come up in the first place.

RenderApps: Lessons about process

A while back I wrote about the ideal team for a startup: the triad of Hacker, Hustler, and Designer. I still believe very strongly in this ideal, and it's funny how times have changed and my team has as well. Now, with RenderApps, we have a top-notch hustler and hacker, but we are lacking on the design side. Fortunately, that's OK, because I also have connections to great designers, and it's just a matter of securing their services with the right combination of timing and price. With an in-house hustler, we can try as hard as we want to get projects, and with the hacker at the base, our strongest suit is still our development.

However, although the makeup of the team remains pretty consistent with what I felt I needed about a year ago, the process has vastly changed. With a couple of projects under my belt that felt like scrambles to the end, I think I've come to understand how not to get an MVP out. No matter how irresistible it is to jump in and start coding an MVP, it's important to really understand your customers. And as a contractor, your customer is your paying client. And I promise you, you don't understand each other.

There are a few features that we commonly build and that are easy to build because of experience. For example: login, user profiles, photo feeds, maps. So when I’m asked to build a social app that has a user login, it’s very tempting to just start putting together a signup/login page that leads to a user profile page, that leads to an activity feed. Pause…guess what? That look and feel is not what the client wanted. And even if this is an MVP and a proof of concept for their new, groundbreaking social mobile app, all that time you spent building the core functionality has been wasted.

I promise you that you will be refactoring the login page. Chances are the client has decided they want Facebook login, or Twitter login, or maybe login via a phone code. (This leads to a different discussion that the specifics of each feature must be as concrete as possible…for a future blog post.)

I promise you that you will have to revisit the activity feed. You thought you were displaying one kind of cell for the feed? Chances are, the content that’s important for the front page is going to change. Or even just the look and feel. That MVP you wanted to put out to the client to show that Hey, they can see real data! just isn’t good enough because instead of real data, they want pretty data. And pretty is not what you had in mind because a tableview cell is a cell.

And I promise you that you’ll have to redesign almost every other feature that you personally thought you had down as a developer. But what if for the first 1/3 of your precious time with the client (and their money) you took off your developer’s hat and put on a designer’s hat?

Let's say we operate on a 10-step process, with step 1 being the first time the client's idea is put to paper, and step 10 being the day the client hands you the check and you remove yourself from their GitHub repo. This is roughly what my recent processes have looked like, and I'm fairly certain that this is a terrible process that we're trying to vastly improve on. (The following lists are color-coded: Client, Developer, Design.)

  • Step 1: Client discussion on look and feel, inspirations, and requirements
  • Step 2-4: Create the XCode project. Build!
  • Step 5: Distribute the awesome prototype to the client. Lukewarm response
  • Step 6-7: Revisions.
  • Step 8: Worried discussions about direction of the app.
  • Step 9: Frenzied revisions.
  • Step 10: Frenzied revisions and discussions.
  • Step 11: Discussions about what we can do to get it into the app store.

As you can see, we didn’t finish in 10 steps. I believe that the process should be:

  • Step 1-3: Client discussions on look and feel, inspirations and requirements.
  • Step 4: Client signoff on fully completed, high fidelity mockups and flows.
  • Step 5-7: Create the XCode project. Build the thing. Iterate internally with Fabric, then distribute through Testflight Beta.
  • Step 8: Client feedback and minor revision requirements
  • Step 9: Minor revisions
  • Step 10: App store

Yes, this is highly idealistic. Maybe it’s naive. But it’s even more naive to think that Steps 2-4 in the original process would save any time or produce any useful output in the long run. As a developer, I’m happy to say that I’d like to defer my efforts to the latter half of the process because that’s where it’ll matter. And for the first 4 steps, I will try as hard as I can to understand what the client wants and does not want. But I will also try as hard as I can to come to an agreement on what will be done and what will not be done.

So what I need is a ton of time from my project manager and my designer. We’ll spend the bulk of the first half of the process in understanding what the client wants. After that, the client is out of the picture until near the end of the app. So is the designer, who will be done with their work the moment the client signs off on it. (This is the naive and optimistic goal.) My role as developer will also be minor near the end: the bulk will happen after I’ve had a clear idea of what has been designed. And finally, we will hit our goal with relatively few revisions and mind-changing. And even if step 9 of the revisions happens to be significant, at this point it wouldn’t be exasperating for the developer because there have not been any radical changes. No code has been thrown away.

The process is now Hustler, Designer, Hacker.