Time to say Goodbye!

November has been a month of huge learning for me. I worked on a digital humanities project called the Hoccleve Archive, which is a digital edition of Thomas Hoccleve's early 15th-century poem, the Regiment of Princes. Initially, there was an issue with incorrect display of line numbers in the poem, and I came up with a solution to display them accurately. Our next task was to create search functionality for a selected word, which would search the poem/lexicon for that word and return results with the word, line number, and part of speech.

I took the responsibility of managing the team and getting the tasks done on time, and I was primarily involved in coding the logic for the search functionality. I divided the search functionality into two parts, the GUI and the back-end logic, and we assigned the work among us accordingly. Zane Blalock, a fellow SIF, worked on setting up the context menu shown in the screenshot below.

[Screenshot: the right-click context menu]
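For context, a context menu for a selected word is typically wired up along these lines. This is a simplified sketch under my own assumptions, not Zane's actual code; showSearchMenu is a hypothetical helper.

```javascript
// Hypothetical helper: in the real page this would render the custom search menu.
function showSearchMenu(word, x, y) {
  console.log(`Search the archive for "${word}" (menu at ${x}, ${y})`);
}

// Capture the selected word on right-click and show the custom menu
// instead of the browser's default one.
document.addEventListener('contextmenu', (event) => {
  const selectedWord = window.getSelection().toString().trim();
  if (selectedWord) {
    event.preventDefault();
    showSearchMenu(selectedWord, event.pageX, event.pageY);
  }
});
```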

 

I worked on the JavaScript file, loading the JSON file that has all the poems merged into a single file. I was able to loop through the complex data structure of the poem node and look for the word and its part of speech. Then I worked with Madison Hanberry, a fellow SIF, on returning the line number, sentence, and poem title. A simplified sketch of that logic is below, followed by screenshots of the output.
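Here is a rough sketch of the search logic. The real merged JSON for the archive is far more complex; the sample data, field names, and structure below are illustrative placeholders only.

```javascript
// Illustrative stand-in for the merged poem JSON (not the real schema).
const poems = [
  {
    title: 'Regiment of Princes (excerpt)',
    lines: [
      {
        number: 1,
        text: 'Musyng upon the restles bisynesse',
        words: [
          { text: 'Musyng', pos: 'verb' },
          { text: 'bisynesse', pos: 'noun' },
        ],
      },
    ],
  },
];

// Walk every poem, line, and word, collecting matches for the selected word.
function searchWord(poems, target) {
  const results = [];
  for (const poem of poems) {
    for (const line of poem.lines) {
      for (const word of line.words) {
        if (word.text.toLowerCase() === target.toLowerCase()) {
          results.push({
            word: word.text,
            partOfSpeech: word.pos,
            lineNumber: line.number,
            sentence: line.text,
            poemTitle: poem.title,
          });
        }
      }
    }
  }
  return results;
}

console.log(searchWord(poems, 'musyng'));
// -> [ { word: 'Musyng', partOfSpeech: 'verb', lineNumber: 1, ... } ]
```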

a) Search within the Poem

[Screenshot: search output within the poem]

b) Search within the Lexicon

[Screenshot: search output within the lexicon]


After both the logic and the UI were ready, Zane worked on integrating them, and we sat down together to resolve the issues that cropped up until the search functionality worked. The screenshots below show the results displayed on the UI.

a) Search within the Lexicon

[Screenshot: final search results displayed in the UI (lexicon)]

b) Search within the Poem

[Screenshot: final search results displayed in the UI (poem)]

The search functionality for the Hoccleve Archive has been wrapped up successfully, and we have deployed it on Amazon Web Services.

Here is the link to the Hoccleve Archive: http://hocclevedev.s3-website-us-east-1.amazonaws.com/. Select a word and then right-click to see the context menu for the search functionality.

Thank you, Dylan and Robin, for all the support! Zane and Madison, you guys have been amazing! It was a pleasure working with you all!

I am graduating this December, so my journey as a SIF has come to an end. I hope to use the experience I gained in all my future endeavors, and I will miss working at SIF. Thanks to Justin, Brennan, and Joe for giving me this wonderful opportunity.

 

October Update!

October has been quite an exciting month for me. I got to work on various projects listed below.

  1. Glass Plate Negative Photo Gallery: The task was to display the glass plate negative photographs from the 1920s (remember, we did photo stitching to merge historic Atlanta photos last semester?!). This month I worked with Zoomify, a product that makes high-quality images available on the web. Zoomify works in two steps:
  • Zoomify Converter: converts images of any size or quality into a streaming format for fast initial display and on-demand viewing of fine detail. The converted images can be either Zoomify folders or ZIF files (Zoomify Image Format); ZIF files are useful for hosted sites that limit support for uploading folders or large numbers of files.
  • Zoomify Image Viewer: enables publishing of multi-megabyte or even multi-gigabyte photos that can be viewed without any large download. Users can interactively zoom and pan to explore huge, truly high-quality images.

To publish the Zoomify-converted image to a website, we need to copy four things to the web server (a minimal example page follows the list):

  • The Viewer file ZoomifyImageViewer.js (provided with the product).
  • Converted Zoomify Image folder (the entire folder, not just its contents).
  • The Assets folder from the product download (and its subfolders).
  • Your web page which displays the images in a preferred format.
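For illustration, the web page itself can be as small as the sketch below. The container id and image folder name are placeholders, and Z.showImage is the call used in the sample pages bundled with the product, so the exact call may differ between Zoomify versions.

```html
<!DOCTYPE html>
<html>
<head>
  <!-- The viewer file provided with the product -->
  <script src="ZoomifyImageViewer.js"></script>
</head>
<body>
  <!-- The viewer renders into this container -->
  <div id="glassPlateViewer" style="width: 800px; height: 600px;"></div>
  <script>
    // "GlassPlateImage" is a placeholder for the converted Zoomify image folder.
    Z.showImage("glassPlateViewer", "GlassPlateImage");
  </script>
</body>
</html>
```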

I designed the webpage to have two image viewers: one is a slideshow of the photographs from the glass plate negative collection, and the other displays a map showing the locations of those photographs. You can get a glimpse of the webpage in the screenshot below.

[Screenshot: glass plate negative photo gallery webpage]

 

  2. Unpacking Manuel's Project: I have been guiding the newbies on using Blender and have assigned them frames for the Unpacking Manuel's project. I believe that in a few days we can have all our frames on the wall that fellow SIF Jack has modeled so brilliantly. I have modeled a few frames myself; below is a screenshot of the rendered frames, which I am still texturing.

[Screenshot: rendered frames on the wall]

 

  3. Hoccleve Archive: We have made progress on the search functionality of the Hoccleve Archive. Earlier, I ran into some issues because of the huge JSON file sizes. After simplifying the JSON files, there was another issue with traversing the JSON, as it has a complicated data structure. However, we figured out that the ideal approach is to filter the objects and then use the JavaScript map method to return the filtered objects in a specific format, as sketched below. I believe the search functionality will be ready to roll in the next few weeks, and I am very excited about it!
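A rough illustration of that filter-then-map pattern; the object shape and field names here are placeholders rather than the real Hoccleve data.

```javascript
// Hypothetical entries: each has a word, a part of speech, and a line number.
const entries = [
  { word: 'prince', pos: 'noun', line: 12 },
  { word: 'wise', pos: 'adjective', line: 12 },
  { word: 'prince', pos: 'noun', line: 40 },
];

// Filter down to the matching objects, then map them into the shape the UI expects.
const results = entries
  .filter(entry => entry.word === 'prince')
  .map(entry => ({ word: entry.word, partOfSpeech: entry.pos, lineNumber: entry.line }));

console.log(results);
// [ { word: 'prince', partOfSpeech: 'noun', lineNumber: 12 },
//   { word: 'prince', partOfSpeech: 'noun', lineNumber: 40 } ]
```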

 

I am looking forward to sharing progress on the current projects and more in my next blog!


My journey continues..

Last semester, as a Student Innovation Fellow, the journey was quite exciting and fun because I got to gain practical knowledge rather than just theoretical learning. I worked on the 3D Atlanta, glass plate negative, and VR game projects, which were huge learning experiences for me. The most challenging task last semester was figuring out the best way to stitch photos, and I believe Adobe Photoshop is the best-suited software, as it allows us to stitch photos manually. Some of the automated software, like Hugin, didn't work because of the quality of the available photos. I am glad I made a YouTube tutorial on it, as the video should help others ease into the process of stitching photos.

I had a fantastic summer working as a Software Development Intern at Vertafore. It was an awesome experience for me because it was my first exposure to corporate life in the USA. The work culture was quite different from that of the country I came from, and I thoroughly enjoyed it. My mentor and colleagues were extremely helpful. I worked on front-end testing, and some of my responsibilities were:

  • Front-end development – adding features to the ImageRight product using AngularJS, HTML5, CSS3, JavaScript, AJAX, and JSON
  • Front-end test development – Selenium WebDriver and Protractor using a mock back end
  • Collaboration with the product manager to outline behaviors for new features
  • Mock object creation using JSON
  • Testing framework development
  • Page object creation using dynamic IDs as selectors
  • Unit test development in JavaScript using Jasmine
  • API test development for the new endpoints created on the server

My summer ended on a high note, as I learned new technologies like Selenium, Protractor, Node.js, and JSON. I am excited to be back as a Student Innovation Fellow, as I will get to apply the knowledge I gained at Vertafore and learn more from the projects I'll be working on this semester.

I am currently working on a digital humanities project called the Hoccleve Archive, which is a digital edition of Thomas Hoccleve's early 15th-century poem, the Regiment of Princes. I am two weeks into the project, and I have come up with a solution to display line numbers accurately. The next task for us is creating search functionality for a selected word, which would search the poem/lexicon for that word and return results with the word, line number, and part of speech.

I am also working on 3D Atlanta, where I will model the Olympia building shown in the image below. This is the present-day Walgreens building located near the Five Points MARTA station.

[Image: the Olympia building]

 

Additionally, I am guiding the newbies on getting started with Blender by providing them with tutorials. Once they get comfortable, we will start creating basic frames for the Unpacking Manuel's project and eventually move on to creating buildings for 3D Atlanta.

As always, I am excited to be a part of interesting projects this semester, and I hope to learn, explore, and share knowledge around new technologies and innovations. I look forward to sharing more updates in my next blog!

Designing a Virtual Reality Sound Game

This is my third endeavor with SIF after the 3D Atlanta and photo stitching projects. I am part of the team that was asked to design a game in the Unity game engine. While designing the game is an exciting prospect in itself, the other inspiring fact is that this game is being designed for visually challenged people.

The part of the game I am designing has four buildings, namely an Arch, a Temple, a Palace, and a Basilica, in an open area. There is a ray probe that stays in one place and rotates about the Z-axis (the Y-axis in Unity), with a square pointer attached to it. When the game starts, a random building starts making noise, and the player has to move the ray probe to find that building. When the ray probe points at the target building, the noise gets louder, and when it moves away from the target, the noise level drops; this helps the player identify the building. This is the passive scene of the game, and it is the part I was asked to design. The game has 12 rounds. If a player hits the target, the score is incremented by 1 and the player moves on to the next round. If they miss the target, the player gets one more turn in the same round; after that, the game moves on to the next round whether the player hits or misses the target.

The buildings are modelled in Unity as simple blocks, and all the specific requirements of the game need to be scripted in C#. I have made use of the Raycast functionality available in Unity. Raycast deserves special mention here, as the concept of the game is designed around it: casting a ray tells us whether the player is pointing at the target's collider. If the raycast hits the collider, it returns true, which lets us determine whether the player has hit the target.

There are quite a few challenges in the game design, but Unity is a powerful tool that offers a wide range of options, and I am currently exploring them. I will post further details in my next blog.

Recreating Atlanta of the 1920s!

Last month was exciting for me, as I was exploring and learning new things like working in Photoshop, and I also ended up making my first video tutorial.

Glass Plate Negative Geotagging – Photo Stitching Project:

Below is an image of the map with all the geotagged images of Atlanta in 1928. I was assigned to stitch all the photos along the northwest side of Pryor St SE between Alabama St SW and Hunter Street NW (now known as MLK Dr).

[Image: map of geotagged 1928 Atlanta photographs]

Initially, I tried a few software tools to photo-stitch the images, which didn't work as expected because of the low quality of the images. Then I switched to Adobe Photoshop and manually merged the photos; the results were better. This is how the final photo-stitched image looks:

[Image: final photo-stitched result]

I made a tutorial on how to photo-stitch images manually using Adobe Photoshop. Please find below the link to my tutorial:

Workshop @Doctor Unity

Last week I got a chance to attend a workshop by James Simpson, fondly known as Doctor Unity. He is known for his extensive work with Unity, developing Unity tools, games, and even a full MMO (massively multiplayer online game). My experience in 3D modelling is limited to Blender, so I want to add Unity to that list. This was my main motivation for attending the workshop, and I thoroughly enjoyed sitting through it.

First, he explained a few things to familiarize us with the tool and then moved on to creating a few simple structures. Later we were asked to model a Plinko board game. It was exciting to model the board and animate the game where the puck/Plinko disc moves, all as part of the workshop. It was nice to learn concepts and tips from a person who has vast experience with Unity.

It was interesting to apply the laws of physics and dynamics to the game. After creating the board and the puck, when we eventually drop the puck, it doesn't do a thing; it simply falls onto the board and lies there under gravity. For it to look real, we apply physics dynamics such as bounce, skid, etc. So we applied bounce dynamics to the puck and then tested it; it worked fine and looks close to reality. Some of the other concepts he taught were similar to Blender, such as adding a light source or a parent-child relation. We could add color and texture to objects, and animating in Unity seems simpler compared to Blender.

Apart from the workshop, we discussed various things related to gaming, including how he, along with two other people, developed a game in two weeks. It was encouraging to know that many other tools have been developed, and are being developed, to enhance 3D modelling.

Final Touches!!

I have completed creating the basic structure of 'The Artistic Barber Shop', and I have already shared my views about creating that structure in my previous blog. I was left with the texturing part, which I would like to discuss in detail here.

To start with, Blender has built-in functionality that helps create our own textures like glass, wood, etc. Apart from this, Blender supports UV mapping. UV mapping is an essential technique in Blender and is used to map an image onto a mesh via UV coordinates (analogous to x, y coordinates). To be more precise, we can point the image to the exact location onto which it has to be textured. The example below helps illustrate UV mapping.

[Images: (A) square plane mesh, (B) vintage Coca-Cola poster chosen as the texture, (C) the UV-mapped result]

 

The image in (A) shows a square plane mesh I created in order to place the vintage Coca-Cola poster on. The image in (B) is the one I chose as the texture. Using UV mapping, I was able to assign the image to the coordinates I wanted, and figure (C) shows the UV-mapped result. I followed the same approach to map all the images onto the structure, and the final output is shown below.

[Image: the final textured barber shop model]

Another important aspect to discuss here is baking the textures and packing the data into the .blend file. UV mapping references each image from its source path, and mapping a building of this magnitude involved a dozen or more images. That is fine when the model is rendered and used on the same computer on which it was created, but if the files have to be exported to another system, we need to handle them properly. To avoid the cumbersome work of changing the source paths of all the images whenever they are moved to another system, we bake our textures. Baking merges all the textures into a single image, so even if we move the .blend file from one system to another, all we need to do is change the source path of that one baked image.

Another important aspect is packing the texture data. Packing includes all the related files (including the UV images) inside the .blend file itself. Therefore, when we move the packed .blend file, we can load the image files from within the .blend file and use them as the sources for the UV textures.

This was my first assignment, and it has been a good experience for me; I've learned many things along the way. I am looking forward to my next assignment. Till then, adios!!

Update on my building!!

As stated earlier, I was assigned to work on a building – a barber shop in particular. I have completed modelling the physical structure from the available images. Material settings and texturing are the other major parts of modelling a building. The physical component here, in Blender, is creating a basic structure that resembles the building. Material settings involve assigning a material to every component of the physical structure. For example, consider any basic building structure: it consists of walls, doors, and windows. It is necessary to classify each distinct component into distinct materials, so we add a material to represent a wall, one to represent a door, and so on. This helps us later when texturing the structure. Texturing basically provides a texture for each material. In general terms, we assign a material named wood to a physical component named door. There are different kinds of wood, like oak, maple, mahogany, etc., to choose from, and texturing helps us assign the desired texture to that door. So, if we want a door made out of oak wood, we apply an oak texture image to that door in order to make it look like one.

Below is the physical structure of the building:

[Image: physical structure of the building modelled in Blender]

 

All the images from back then are monochrome, so we are trying to pick the best-suited textures for each component. We've been discussing this during our weekly meetings, as it is pretty difficult to make out textures from a grayscale image.

I am now also a part of the 3D Scanning project, which is set to begin this Thursday. I am quite excited to work on it as well, and hopefully I will be able to share progress on it in the coming weeks.

Exciting Beginning..

This is my first semester in the SIF program, and I am currently assigned to the 3D Atlanta project, which is led by Brennan. Wasfi Momen, Megan Nicole Smith, and Rumman Shakib Ahmed are my teammates. The project focuses on recreating an Atlanta city block, specifically Decatur Street at Central Avenue, where Classroom South is located now. The goal of our project is to show a historical view of the space as it looked in the 1920s and 1930s.

Some of the scenes were created well before I joined, and that work is very impressive; it inspires me to work harder. The task of the research team is to collect any kind of data that could help us recreate the past city. The data collected by our research team consists of a survey map laying out the city and some images giving us insight into the city's architecture.

I really enjoy working on this project because part of it involves going through the images and locating the buildings being recreated on a survey map. What does that feel like? Almost like finding the lost city of Atlantis, but better: we get to recreate it. I know I went a little overboard, but the project is really exciting. I have experience using a 3D modelling program, Blender, but my experience is limited to coursework and a course project. The 3D Atlanta project gives me an opportunity to explore a whole new dimension, and I hope to continue working on it.

I am currently working on Building 57, which is a barber shop. Below is an image of the building I am going to recreate.

[Image: J. S. Young's Artistic Barbershop, 57 Central Avenue SW, Atlanta, Georgia, circa 1927]

I promise to keep you updated on my work as time goes on. Currently, my aim is to match the level of work already done and help finish the tasks.

About Me!

 

[Photo: Priyanka]

I am a graduate student in computer science at GSU. Hailing from India, I have an undergraduate degree in computer science and worked as a Systems Engineer with Tata Consultancy Services, India, for 2.7 years. Then I decided to expand my horizons and pursue further studies in the US, so I joined GSU in spring 2015, and this is my second semester here.

This semester is going to involve a lot of learning for me, as I am exploring new subjects like Data Visualization, Machine Learning, and Advanced Operating Systems.

I am also excited because my journey as a SIF starts this semester. As a Student Innovation Fellow, I will be working on the 3D Atlanta project. Apart from systems engineering, graphics, and information systems, I love adventure and travel, and I plan to tour the world one day.