Friday, 18 March 2016

Summary of March 14th - Demonstrations of my application

On March 14th, I recorded some videos demonstrating my iOS App.
Video 1: Demonstration of my App when the distance is 5 meters

Video 2: Demonstration of my App when the distance is 10 meters

Video 3: Demonstration of my App when the distance is 15 meters

Video 4: Demonstration of my App when the distance is 20 meters

Video 5: Demonstration of my App when the distance is 25 meters

Video 6: Demonstration of my App when the distance is 28 meters

Summary of March 9th – Add icon to my application

The linked tutorial showed me how to add an icon to my application.


I designed the icon myself, and one of my friends helped me produce it in Photoshop:
Figure 1: Icon of my application


Figure 2: Adding icons to Xcode library

Figure 3: Icon files

Summary of March 8th – Add a function to display the distance and modify the position

This was the last Tuesday before the bench. I had a short meeting with my supervisor and gave him a quick demo of how my App works. I also showed him my poster; he pointed out that the blank areas would look ugly, so I decided to make further modifications. He also advised me that it would be good to display the distance on the screen.
Display the distance:

Figure 1: Display distance


I added an outlet declaration in the ViewController and a text label in the view:
@interface ViewController () {

    AVAudioPlayer *alarm;                      // plays the alarm sound
    __weak IBOutlet UILabel *displaydistance;  // label that shows the measured distance

}
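Updating that label is then a single line. A minimal sketch, assuming the estimated distance is already stored in a float called distance (a variable name I use only for illustration):

```objc
// `distance` is a hypothetical variable holding the estimated distance in meters.
float distance = 12.5f;
displaydistance.text = [NSString stringWithFormat:@"%.1f m", distance];
```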


Figure 2: Modified version of the interface

Comments:
Here, Chris helped again. For more information, click the link below.

Summary of March 3rd – Make the App play an alarm

Logbook noting:
First, I downloaded three sample alarm sounds to my computer and dropped one of them into Xcode as my audio alarm.


Figure 1: Alarm music downloaded from the internet

Then I wrote the code shown below into Xcode:


 
Figure 2: Code that sets the music as the alarm sound



These lines of code add the audio interface to my program and prepare an mp3 file for playback. After this, the code below plays the alarm when necessary:

Figure 3: Play the alarm if necessary
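Since the figures only show screenshots, here is a minimal sketch of the kind of code behind them, assuming the audio file is named alarm.mp3 (the file name is my assumption, not necessarily the one in the project):

```objc
// Requires: #import <AVFoundation/AVFoundation.h>
// Load the bundled mp3 and prepare the player ("alarm.mp3" is an assumed file name).
NSURL *url = [[NSBundle mainBundle] URLForResource:@"alarm" withExtension:@"mp3"];
alarm = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:nil];
[alarm prepareToPlay];

// Later, when the object is judged too close, sound the alarm.
if (!alarm.isPlaying) {
    [alarm play];
}
```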
Comments:
Chris's online instructions helped me a lot in this step.

Summary of March 2nd – Get pixel width of red blocks

Logbook noting:
Figure 1: Part of the codes of calculating pixel width

Figure 2: Part of the pixel-width results for one line


Figure 3: More pixel-width results for one line


Comments:
If you need the full code, please email me to discuss it.
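Although the full code is not shown here, the core idea can be sketched in plain C: once each pixel in a scan line has been classified as red or not, the pixel width of a red block is the length of the longest consecutive run of red pixels. This is my reconstruction of the idea, not the App's exact code:

```c
#include <stddef.h>

/* Given one scan line of flags (1 = red pixel, 0 = not red), return the length
   of the longest consecutive run of red pixels - the red block's pixel width. */
size_t longest_red_run(const int *isRed, size_t width) {
    size_t best = 0, run = 0;
    for (size_t x = 0; x < width; x++) {
        if (isRed[x]) {
            run++;
            if (run > best) best = run;
        } else {
            run = 0;
        }
    }
    return best;
}
```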

Summary of March 2nd – Problem: some of the red paper can’t be detected

Logbook noting:

3.2
A problem appeared while I was working on the pixel width of red blocks. Sometimes the App tells me there is no red colour in a picture even when I take a picture of red paper, or only a few red points on the paper are detected. For example:
2016-03-01 22:33:42.090 watchahead[9093:3083668] There are 40 red colour points.
2016-03-01 22:33:51.591 watchahead[9093:3083668] Snapshotting a view that has not been rendered results in an empty snapshot. Ensure your view has been rendered at least once before snapshotting or snapshot after screen updates.
2016-03-01 22:33:51.598 watchahead[9093:3083668] Snapshotting a view that has not been rendered results in an empty snapshot. Ensure your view has been rendered at least once before snapshotting or snapshot after screen updates.
2016-03-01 22:33:58.669 watchahead[9093:3083668] the position of this red point is ( 5, 4)
2016-03-01 22:33:58.670 watchahead[9093:3083668] the position of this red point is ( 5, 5)
2016-03-01 22:33:58.671 watchahead[9093:3083668] the position of this red point is ( 5, 6)
2016-03-01 22:33:58.677 watchahead[9093:3083668] 
(
        63,
        76,
        44,
        255
    ),
        (
        61,
        62,
        55,
        255
  ),


The number of red points detected in this picture is 3, which is definitely wrong. I tried several times and got similar answers, but if I change the angle of the photo, the results are fine. So I think it might be the reflection of white light that causes these errors.
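This behaviour fits a simple per-pixel threshold classifier: glare pushes all three channels up, so a washed-out red pixel no longer dominates in the red channel and fails the test. A hedged sketch of such a classifier (the threshold values are my assumptions, not the App's):

```c
/* Classify a pixel as red when the red channel is strong and green/blue are weak.
   The thresholds (150, 100, 100) are illustrative assumptions only. */
int is_red(unsigned char r, unsigned char g, unsigned char b) {
    return r > 150 && g < 100 && b < 100;
}
```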


Summary of March 1st- Get RGB value of a picture

3.1
After I was able to take photos, I worked on getting the RGB values of a picture.
I added the code into the existing file.
This long block of code is a single function. By restricting the number of sampling points on each row and column, I can read the RGB values of a fixed number of pixels. My initial plan was to read every pixel's RGB value, i.e. 1334*750 points. However, in real tests even 250*250 points take a long time, so processing 1334*750 points quickly is not feasible; if the processing time is too long, this project loses its value.


    float rowdimention = 250;   // number of sample points per row
    float linedimention = 250;  // number of sample points per column
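The sampling idea can be sketched in plain C: instead of visiting all 1334*750 pixels, map a coarse grid of sample indices onto pixel coordinates with a fixed stride. This is my reconstruction of the approach, not the App's exact code:

```c
/* Map sample index s (0 <= s < samples) to a pixel coordinate (0 <= result < size),
   so e.g. a 250-point grid spans a 1334-pixel-wide image with a fixed stride. */
int sample_to_pixel(int s, int samples, int size) {
    return (int)((long)s * size / samples);
}

/* Byte offset of pixel (x, y) in an RGBA8888 buffer that is `width` pixels wide. */
long rgba_offset(int x, int y, int width) {
    return ((long)y * width + x) * 4;
}
```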



Figure 1: Code that gets the RGB values of a picture



Figure 2: Display an instruction if necessary 




After running this code, the App can print out the RGB value of every sampled point of the picture! At the same time, it counts the total number of red points in a picture. This is very helpful for the next step: obtaining the pixel width of the red blocks in a picture.

Comments:
At that time, I set the resolution to 250*250. Later experiments proved this resolution too low, so I changed it to 625*425. At the beginning, I couldn't get the RGB value of any point; I kept modifying the code until it worked.

Summary of February 29th – Code to open the camera

2.29
While learning Objective-C from online lectures, I tried to develop my own application. First, I wrote code to open the camera and take a photo. Finally, on the 29th of February, I got the photo-taking code working.


 

I took several photos, but they could not be saved into the photo library.
Logbook noting:
Between February 1st and February 17th, I studied Objective-C systematically. First I watched a Chinese course, then an English one: Stanford University's Developing iOS 7 Apps. These online lectures, especially the latter, helped me a lot with Objective-C programming.



Comments:
After December 11th, the Christmas holiday began, followed by the final exams of the first semester, so I had almost no time to make progress on my project. Right after the exams, I began to learn Objective-C from the beginning. The reason I studied it systematically was that I lacked a feel for the basic programming skills; a single paper could not give me a whole picture of this project. I have some basic C and C++ background, but it was still very difficult for me to develop a new application, so I decided to learn from scratch. Here I want to recommend some fabulous online courses.
1.       Stanford University Developing iOS 7 Apps
This series of videos covers the most common and basic ideas of Apple app development. The lecturer uses many demos to show students how to apply the techniques from the lectures in a real app.
Through these videos, I learned how an app runs, the use of protocols, the different framework libraries, and the use of buttons and labels. These skills helped me very much in the following steps.


2.     电脑教程手机程序 iOS开发快速入门教程 (Computer Tutorial: A Quick-Start Course on iOS Mobile App Development), Lecturer: 李明 (Li Ming)
This series of videos is specially prepared for Chinese-speaking developers. I learned a lot from them. As a newcomer to Objective-C, this was exactly what I needed to get a first feel for the language.


Summary of December 10th – Get pixel value of a picture

Logbook noting:

How to get pixel data from a UIImage (Cocoa Touch) or CGImage (Core Graphics)?

+ (NSArray *)getRGBAsFromImage:(UIImage *)image atX:(int)x andY:(int)y count:(int)count
{
    NSMutableArray *result = [NSMutableArray arrayWithCapacity:count];

    // First draw the image into a buffer with a known RGBA8888 layout.
    CGImageRef imageRef = [image CGImage];
    NSUInteger width = CGImageGetWidth(imageRef);
    NSUInteger height = CGImageGetHeight(imageRef);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    unsigned char *rawData = (unsigned char *)calloc(height * width * 4, sizeof(unsigned char));
    NSUInteger bytesPerPixel = 4;
    NSUInteger bytesPerRow = bytesPerPixel * width;
    NSUInteger bitsPerComponent = 8;
    CGContextRef context = CGBitmapContextCreate(rawData, width, height,
                    bitsPerComponent, bytesPerRow, colorSpace,
                    kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big);
    CGColorSpaceRelease(colorSpace);
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
    CGContextRelease(context);

    // rawData now holds the image in premultiplied RGBA8888. Un-premultiply each
    // component and normalise it into [0, 1] for UIColor.
    NSUInteger byteIndex = (bytesPerRow * y) + x * bytesPerPixel;
    for (int i = 0; i < count; ++i)
    {
        CGFloat alpha = ((CGFloat) rawData[byteIndex + 3]) / 255.0f;
        CGFloat divisor = (alpha > 0.0f) ? (alpha * 255.0f) : 255.0f; // avoid divide-by-zero
        CGFloat red   = ((CGFloat) rawData[byteIndex])     / divisor;
        CGFloat green = ((CGFloat) rawData[byteIndex + 1]) / divisor;
        CGFloat blue  = ((CGFloat) rawData[byteIndex + 2]) / divisor;
        byteIndex += bytesPerPixel;
        UIColor *acolor = [UIColor colorWithRed:red green:green blue:blue alpha:alpha];
        [result addObject:acolor];
    }
    free(rawData);
    return result;
}
What is CGImageRef?


CGImageRef CGImageCreate(size_t width, size_t height, size_t bitsPerComponent, size_t bitsPerPixel, size_t bytesPerRow, CGColorSpaceRef space, CGBitmapInfo bitmapInfo, CGDataProviderRef provider, const CGFloat *decode, bool shouldInterpolate, CGColorRenderingIntent intent);


How to use CGContextDrawImage?

I need some help using CGContextDrawImage. I have the following code, which creates a bitmap context and converts the pixel data to a CGImageRef. Now I need to display that image. I'm not very clear on how I'm supposed to do that. The following is my code:


- (void)drawBufferWidth:(int)width height:(int)height pixels:(unsigned char *)pixels
{
    const int area = width * height;
    const int componentsPerPixel = 4;
    // Heap allocation: a stack array of width*height*4 bytes can overflow the stack.
    unsigned char *pixelData = malloc(area * componentsPerPixel);
    for (int i = 0; i < area; i++)
    {
        const int offset = i * componentsPerPixel;
        // Copy each pixel in turn (the original copied pixels[0..3] repeatedly).
        pixelData[offset]     = pixels[offset];
        pixelData[offset + 1] = pixels[offset + 1];
        pixelData[offset + 2] = pixels[offset + 2];
        pixelData[offset + 3] = pixels[offset + 3];
    }
    const size_t bitsPerComponent = 8;
    const size_t bytesPerRow = width * componentsPerPixel;
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef gtx = CGBitmapContextCreate(pixelData, width, height, bitsPerComponent, bytesPerRow, colorSpace, kCGImageAlphaPremultipliedLast);
    CGImageRef myimage = CGBitmapContextCreateImage(gtx);
    // What code should go here to display the image?
    CGColorSpaceRelease(colorSpace);
    CGContextRelease(gtx);
    CGImageRelease(myimage);
    free(pixelData);
}
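The question above leaves the display step open. One common answer, sketched here under the assumption that the method lives in a view controller with a UIImageView outlet (self.imageView is my assumed name, not part of the quoted code), is to wrap the CGImageRef in a UIImage before it is released:

```objc
// Wrap the CGImageRef in a UIImage and hand it to an (assumed) UIImageView outlet.
UIImage *uiImage = [UIImage imageWithCGImage:myimage];
self.imageView.image = uiImage;
```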

Comments:
Both blocks of code are from http://stackoverflow.com/. If you are trying to get the RGB values of a picture, the second block of code is very helpful.


The basic idea is to build, for each pixel, an array of 4 numbers representing red, green, blue and alpha, and then work with a huge matrix containing thousands of these arrays.
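That per-pixel layout can be sketched in plain C (the sample values below echo the log output quoted earlier):

```c
/* One pixel stored as 4 consecutive bytes: R, G, B, A (RGBA8888). */
typedef struct { unsigned char r, g, b, a; } Pixel;

/* Read component k (0 = R, 1 = G, 2 = B, 3 = A) of pixel i from a flat RGBA buffer. */
unsigned char component(const unsigned char *buf, int i, int k) {
    return buf[i * 4 + k];
}
```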