Concat is a phenomenal app for turning your memorable photos into amazing videos. It is considered one of the best apps in this category, with many great features and an easy-to-use interface.

Do you want us to build an App for you?

Over the last few decades, there has been a growing buzz about the field of UX and its impact around the globe. The UX profession is young compared with other established fields of Computer Science and modern-day technology. However, it is not limited to Information Technology and Computer Science; it draws on various other fields as well. Knowing how huge UX is makes it harder to see what your future will look like if you are new to UX Design or are considering jumping into this field.

UX offers more roles and opportunities for a long career than people think; you just have to choose what suits you and your skills, and what your future goal will be, because different UX specialists deal with different challenges and roles. Here are some of the roles UX offers:

  1. UX Manager
  2. Team Leader
  3. UX Project Lead
  4. UX Principal

Each of these plays an essential role in any organization; as I mentioned earlier, you have to match the role to your interests and skills. In this blog, I will try to outline these roles, so let’s begin.

UX Manager

As the name suggests, this role is not about burying yourself in every tiny detail of a project; it is more of a managerial position. As a UX Manager, you will be responsible for reviewing and evaluating deliverables from the design and research teams. Creating UX design strategies for the organization and its clients is also part of the role, as is expanding and developing the organization’s UX team.

Team Leader

The Team Leader role is all about leading the team. As a Team Leader, you will manage, mentor, and synchronize the team members. A Team Leader needs to lead from the front, and that requires skills like resource management, project management, time management, and, most importantly, being a good mentor.

UX Project Lead

The UX Project Lead takes responsibility for controlling and managing the user experience of one project at a time. As a UX Project Lead, you will oversee the project through to completion. Responsibilities like interacting with clients and stakeholders also fall into the UX Project Lead’s bucket, and the role contributes significantly to creating a vision for a product and its development.

UX Principal

Having a UX Principal in an organization is infrequent, but where one exists, they rank above all other UX team members. Working in detail and coming up with new ideas belongs to this person. The UX Principal works with clients and stakeholders, because identifying the problem and finding the best solution is what they love doing. The role is similar to UX Project Lead in some ways, but the breadth of responsibilities it carries makes it more valuable. A UX Principal is also known as a UX Strategist, because they define the UX strategy for an organization, product, website, or app.

Conclusion

As mentioned earlier, these role descriptions can give UX Designers who want to grow their careers an idea of the options available. Their interests, strengths, and skills will determine which path they take to become a UX Leader in the future.


According to Maslow’s hierarchy of needs, after our physical and social needs are fulfilled, we feel the need to be proud of something we achieved or accomplished. This need can be fulfilled by creating something of value to others, and value can be defined in many different ways. One definition can be drawn from aesthetics: the sense of beauty, symmetry, and harmony. If people can create something that others find beautiful, they have just made something of value; they will be praised by others, fulfilling their need for pride.

In software development, aesthetics play an important role. A good user experience, along with soothing colors and balanced animations, can make software feel more natural, more interactive, and friendlier than its competitors, driving more users to it and ultimately increasing revenue. Android is no different.

Getting Started

Working with animations has always been challenging in Android. There are multiple options to choose from, classes and resources like Animation, AnimationUtils, AnimationSet, and R.anim.fade_in to name a few, and it is difficult for a learner to pick one. There is also a lot to understand before the code makes sense. For instance, look at the code below:

val textView = findViewById<TextView>(R.id.textView)
val animation = TranslateAnimation(0.0F, 100.0F, 0.0F, 100.0F)
animation.duration = 1000
textView.startAnimation(animation)

At first, it looks very simple. We’re creating an animation object that describes how the animation will proceed, and we’re setting that animation on a text view. The text view is supposed to animate from point (0,0) to point (100,100) in one second. But if you execute this code, the text view jumps back to its original position once the animation completes. The reason is that in Android, a translate animation doesn’t change the actual position of the item; once the animation ends, the item returns to where it was.
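If all you need is a simple move that sticks, one common fix (a sketch in Java, using the same textView as above) is a property animation via View.animate(), which updates the view’s actual translation properties:

```java
// Property animation: unlike the old view-animation API, this changes the
// view's real translationX/translationY, so it stays where it lands.
TextView textView = findViewById(R.id.textView);
textView.animate()
        .translationX(100f)
        .translationY(100f)
        .setDuration(1000)
        .start();
```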

Motion Layout

Point being, animating views in Android is not easy. Or should I say was not easy, because Android has come up with something that makes animating views effortless. MotionLayout, introduced at Google I/O 2018, is an extension of ConstraintLayout, so it comes bundled with ConstraintLayout 2.0. With MotionLayout, elaborate animations can be built using only XML; no Java or Kotlin code is required.

But don’t worry, I will keep things simple to show how MotionLayout works and how we can build animations using just the basic tools. In this article, I will only cover the basics of MotionLayout and how to make a simple animation with it.

The first thing you need to understand is that MotionLayout extends ConstraintLayout. To use MotionLayout, add the following dependency to your Gradle file:

implementation 'androidx.constraintlayout:constraintlayout:2.0.0-beta4'

If you don’t know what ConstraintLayout is or how it works, I suggest you learn it first. If you are already familiar with it, the following layout code will make sense:

<androidx.constraintlayout.widget.ConstraintLayout
	xmlns:android="http://schemas.android.com/apk/res/android"
	xmlns:app="http://schemas.android.com/apk/res-auto"
	android:layout_width="match_parent"
	android:layout_height="match_parent">
	
	<TextView
		android:id="@+id/textView1"
		android:layout_width="300dp"
		android:layout_height="300dp"
		android:background="#3F51B5"
		android:gravity="center_vertical"
		android:padding="10dp"
		android:text="@string/textView1Str"
		android:textColor="@android:color/white"
		app:layout_constraintBottom_toBottomOf="parent"
		app:layout_constraintEnd_toEndOf="parent"
		app:layout_constraintStart_toStartOf="parent"
		app:layout_constraintTop_toTopOf="parent" />
	
</androidx.constraintlayout.widget.ConstraintLayout>

This is our activity_main.xml file; it creates a text view and keeps it in the center of the screen like this:


As we established earlier, MotionLayout is an extension of ConstraintLayout, so replacing ConstraintLayout with MotionLayout should work, courtesy of polymorphism. Let’s try that:

<androidx.constraintlayout.motion.widget.MotionLayout
	xmlns:android="http://schemas.android.com/apk/res/android"
	xmlns:app="http://schemas.android.com/apk/res-auto"
	android:layout_width="match_parent"
	android:layout_height="match_parent">
	
	<TextView
		android:id="@+id/textView1"
		android:layout_width="300dp"
		android:layout_height="300dp"
		android:background="#3F51B5"
		android:gravity="center_vertical"
		android:padding="10dp"
		android:text="@string/textView1Str"
		android:textColor="@android:color/white"
		app:layout_constraintBottom_toBottomOf="parent"
		app:layout_constraintEnd_toEndOf="parent"
		app:layout_constraintStart_toStartOf="parent"
		app:layout_constraintTop_toTopOf="parent" />
	
</androidx.constraintlayout.motion.widget.MotionLayout>

But if you run the application now, it will give an error. Not because polymorphism failed, but because MotionLayout has one prerequisite: every MotionLayout must have a MotionScene file associated with it. A MotionScene is an XML file that contains the animation code. Create a folder in the res directory of your Android project and create an XML file inside it. For our demo, we will create a folder named xml and a MotionScene file named scene_1.xml.

Now that we’ve created our file, here is how we associate it with a MotionLayout:

<androidx.constraintlayout.motion.widget.MotionLayout
	xmlns:android="http://schemas.android.com/apk/res/android"
	xmlns:app="http://schemas.android.com/apk/res-auto"
	android:layout_width="match_parent"
	android:layout_height="match_parent"
	app:layoutDescription="@xml/scene_1">
	
	<TextView
		android:id="@+id/textView1"
		android:layout_width="300dp"
		android:layout_height="300dp"
		android:background="#3F51B5"
		android:gravity="center_vertical"
		android:padding="10dp"
		android:text="@string/textView1Str"
		android:textColor="@android:color/white"
		app:layout_constraintBottom_toBottomOf="parent"
		app:layout_constraintEnd_toEndOf="parent"
		app:layout_constraintStart_toStartOf="parent"
		app:layout_constraintTop_toTopOf="parent" />
	
</androidx.constraintlayout.motion.widget.MotionLayout>

By referencing scene_1 in app:layoutDescription="@xml/scene_1" in the MotionLayout tag, we associate this MotionLayout with a MotionScene. Now our MotionLayout is ready for animations. Let’s dive into the scene_1.xml file and write some.

MotionScene

MotionScene is where your animation code lives. In XML terms, MotionScene is a tag whose children are Transition, ConstraintSet, and StateSet. We will focus on Transition and ConstraintSet, because these two are responsible for animations. A MotionScene can contain multiple Transition and ConstraintSet tags.

A ConstraintSet holds the information about the resting state of a view. By resting state, I mean position, rotation angle, alpha, etc., the building blocks of animations, at a time when the animation has either not yet started or has finished. Each ConstraintSet expresses the resting state of the views mentioned inside it. The following code shows how a ConstraintSet holds that information:

<ConstraintSet android:id="@+id/end">

    <Constraint
        android:id="@+id/textView1"
        android:layout_width="300dp"
        android:layout_height="300dp"
        android:rotation="-90"
        android:translationX="-280dp"
        motion:layout_constraintBottom_toBottomOf="parent"
        motion:layout_constraintStart_toStartOf="parent"
        motion:layout_constraintTop_toTopOf="parent" />

</ConstraintSet>

The id on the ConstraintSet gives it a name, something we can use to refer to it later. How this id is used is coming up shortly.

Now a question arises: what about the state of the views declared in the layout file, the file actually hosting them? The answer is that if a view inside a MotionLayout is also mentioned inside a ConstraintSet in the MotionScene, its state information in the layout file is overridden by the state information in the MotionScene file. For example, if the width and height of textView1 are set to 200 in activity_main.xml but to 300 in scene_1.xml, the view will be 300 wide and 300 tall; the information in scene_1.xml overrides the information in activity_main.xml.

Now, to explain that id, I’ll first explain the other child of the MotionScene tag, Transition. A Transition holds the start and end state of a view; essentially, it references two ConstraintSets. The following code will help you understand:

<Transition
    motion:constraintSetEnd="@+id/end"
    motion:constraintSetStart="@+id/start">
</Transition>

Here, end and start are two different ConstraintSets, shown below:

start:

<ConstraintSet android:id="@+id/start">

    <Constraint
        android:id="@+id/textView1"
        android:layout_width="300dp"
        android:layout_height="300dp"
        android:rotation="0"
        motion:layout_constraintBottom_toBottomOf="parent"
        motion:layout_constraintEnd_toEndOf="parent"
        motion:layout_constraintStart_toStartOf="parent"
        motion:layout_constraintTop_toTopOf="parent" />

</ConstraintSet>

end:

<ConstraintSet android:id="@+id/end">

    <Constraint
        android:id="@+id/textView1"
        android:layout_width="300dp"
        android:layout_height="300dp"
        android:rotation="-90"
        android:translationX="-280dp"
        motion:layout_constraintBottom_toBottomOf="parent"
        motion:layout_constraintStart_toStartOf="parent"
        motion:layout_constraintTop_toTopOf="parent" />

</ConstraintSet>

Now we know why the id is used in a ConstraintSet. We have two resting states of textView1 and a Transition tag declaring which resting state is the initial one and which is the final one. What we still need is a trigger, an action upon which the animation should start. Luckily, that trigger can be declared inside the Transition tag. There are two trigger types, OnClick and OnSwipe, and if I have to explain those too, then maybe you are reading the wrong article. The following code shows how they are declared inside the Transition tag.

a. OnClick:

<Transition
    motion:constraintSetEnd="@+id/end"
    motion:constraintSetStart="@+id/start">

    <OnClick
        motion:clickAction="toggle"
        motion:touchRegionId="@id/textView1" />

</Transition>

OnClick has two attributes, clickAction and touchRegionId. touchRegionId is the id of the view to be clicked, and clickAction specifies what to do when that view is clicked. We’ve set clickAction to toggle because we want our animation to toggle from start to end and back again on each click.

b. OnSwipe:

<Transition
    motion:constraintSetEnd="@+id/end"
    motion:constraintSetStart="@+id/start">

    <OnSwipe
        motion:dragDirection="dragLeft"
        motion:touchRegionId="@id/textView1" />

</Transition>

OnSwipe also has two attributes, dragDirection and touchRegionId. touchRegionId serves the same purpose, and dragDirection specifies how the view should be dragged or swiped for the animation to start. We’ve set dragDirection to dragLeft because we want our animation to start when the user drags textView1 to the left. For the purposes of this article, we’ll use OnSwipe instead of OnClick.

Now our code is ready to execute. We’ve created a text view in a MotionLayout, we’ve set its initial and final state using ConstraintSets, and using a Transition, we’ve set how the animation will start. If we execute the code right now, you’ll see this:

The beauty of MotionLayout is that you only set the initial and final resting states, and the rest is handled by MotionLayout itself. Here, we didn’t have to calculate the rotation angle at any given time during the animation; we only said that in its final state, the view should be rotated -90 degrees. You will also notice that the reverse animation, from the final state back to the initial state, is created automatically by MotionLayout.
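Triggers don’t have to come from touch: MotionLayout can also be driven programmatically. A minimal sketch in Java (it assumes the MotionLayout above was given an id of motionLayout, which our XML doesn’t show):

```java
// Drive the same start -> end transition from code instead of a swipe.
MotionLayout motionLayout = findViewById(R.id.motionLayout);
motionLayout.transitionToEnd();    // animate to the "end" ConstraintSet
// motionLayout.transitionToStart(); // ...and back again
```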

Conclusion

This is only the most basic demonstration of the power of MotionLayout. With it, one can create complex animations without ever invoking Java or Kotlin. In the next article, we will go through some other tools available in MotionLayout, like:

  • CustomAttribute
  • KeyFrameSet
  • KeyPosition and KeyAttribute

And create better, more complex animations. I encourage you to experiment with this demonstration; its code is available on GitHub under the branch name Article_1.


VPN technology is getting popular all over the world because it provides privacy and counters restrictions on access to applications and websites. The need for a VPN varies with the circumstances around the user, such as government policies.

IKEv2 is among the most secure and fastest VPN protocols. In this blog, we show how to develop an Android VPN app using the IKEv2 protocol. Android does not provide built-in support for IKEv2, so we will use the strongSwan (open-source IPsec-based VPN solution) libraries for this purpose.

Getting Started

The scope of this blog is to configure strongSwan and integrate it into an Android app. There are three major parts to this app:

  • StrongSwan libraries  (libstrongswan, libcharon etc.)
  • Application in Java (Android)
  • Library to glue these two parts

The Java part and the libraries communicate by means of the Java Native Interface (JNI).
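On the Java side, this JNI communication boils down to loading the native libraries and declaring native methods that the C glue code implements. A hypothetical sketch (class and library names are illustrative, not strongSwan’s actual file layout):

```java
// Hypothetical JNI bridge: the native glue library implements these methods.
public class CharonBridge {
    static {
        // Load the native libraries produced by ndk-build.
        System.loadLibrary("strongswan");
        System.loadLibrary("charon");
    }

    // Implemented in C via JNI; starts an IKEv2 negotiation with the gateway.
    public static native void initiate(String type, String gateway,
                                       String username, String password);
}
```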

To achieve this, three major steps need to be implemented:

  1. Configure StrongSwan
  2. Integrate StrongSwan in Android App
  3. Java code to connect to the VPN using strongSwan

1. Configure StrongSwan:

I am working on the Windows platform. Configuring strongSwan requires some shell commands, and the Windows cmd does not support them, so I used a CentOS virtual machine. Download VMware or VirtualBox to host your virtual machine on Windows, then open the .vmx file.

In CentOS you need the following tools:

  • a recent GNU C compiler (>= 3.x)
  • automake
  • autoconf
  • libtool
  • pkg-config
  • gettext
  • perl
  • python
  • lex/flex
  • yacc/bison
  • gperf

Now follow these steps to configure strongSwan.

a. Clone StrongSwan

Clone StrongSwan using command:

git clone https://git.strongswan.org/strongswan.git

After a successful checkout, give the autotools a try.

b. Go to StrongSwan directory

First, go to the strongswan directory that you cloned, using the following command:

cd strongswan/

c. Create source files

Then run these commands one by one, waiting for each to finish successfully:

./autogen.sh
./configure
make
make install

This creates several pre-built source files. Next, go to the JNI directory by running the following command:

cd src/frontends/android/app/src/main/jni

And run this command

git clone https://git.strongswan.org/android-ndk-boringssl.git -b ndk-staticopenssl

Now copy the code from CentOS to Windows and run the app in Android Studio. The code for the app can be found in the strongswan/src/frontends/android directory of the repository. To build it, the Android SDK and NDK are required.

2. Integrate StrongSwan in Android App:

Now we integrate the strongSwan libraries into an Android app. Here we use the sample Android app provided by strongSwan as the front-end app. For this purpose we need the .so files for the native classes to communicate with the Java classes. Download the strongSwan project from GitHub, copy the JniLibs folder from that GitHub project, and paste it into the project you copied from CentOS, at the following path:

strongswan/src/frontends/android/app/src/main

Now build the project. If there is an NDK path problem, try replacing this

task buildNative(type: Exec) {
    workingDir 'src/main/jni'
    commandLine "${android.ndkDirectory}/ndk-build", '-j', Runtime.runtime.availableProcessors()
}

with this

task buildNative(type: Exec) {
    workingDir 'src/main/jni'
    commandLine "${android.ndkDirectory}\\ndk-build.cmd", '-j', Runtime.runtime.availableProcessors()
}

and sync the project.

3. Java code to connect to the VPN using strongSwan:

To connect to the VPN using strongSwan in this app, you need to replace some code, as described below.

In the file

strongswan/src/frontends/android/app/src/main/java/org/strongswan/android/logic/CharonVpnService.java

you will see this code:

SettingsWriter writer = new SettingsWriter();
writer.setValue("global.language", Locale.getDefault().getLanguage());
writer.setValue("global.mtu", mCurrentProfile.getMTU());
writer.setValue("global.nat_keepalive", mCurrentProfile.getNATKeepAlive());
writer.setValue("global.rsa_pss", (mCurrentProfile.getFlags() & VpnProfile.FLAGS_RSA_PSS) != 0);
writer.setValue("global.crl", (mCurrentProfile.getFlags() & VpnProfile.FLAGS_DISABLE_CRL) == 0);
writer.setValue("global.ocsp", (mCurrentProfile.getFlags() & VpnProfile.FLAGS_DISABLE_OCSP) == 0);
writer.setValue("connection.type", mCurrentProfile.getVpnType().getIdentifier());
writer.setValue("connection.server", mCurrentProfile.getGateway());
writer.setValue("connection.port", mCurrentProfile.getPort());
writer.setValue("connection.username", mCurrentProfile.getUsername());
writer.setValue("connection.password", mCurrentProfile.getPassword());
writer.setValue("connection.local_id", mCurrentProfile.getLocalId());
writer.setValue("connection.remote_id", mCurrentProfile.getRemoteId());
writer.setValue("connection.certreq", (mCurrentProfile.getFlags() & VpnProfile.FLAGS_SUPPRESS_CERT_REQS) == 0);
writer.setValue("connection.strict_revocation", (mCurrentProfile.getFlags() & VpnProfile.FLAGS_STRICT_REVOCATION) != 0);
writer.setValue("connection.ike_proposal", mCurrentProfile.getIkeProposal());

Replace it with

initiate(mCurrentProfile.getVpnType().getIdentifier(),
mCurrentProfile.getGateway(), mCurrentProfile.getUsername(),
mCurrentProfile.getPassword());

Now it should work.

Add StrongSwan as a Module in Android App:

If you want to use strongSwan in your own app, add the android folder from strongswan/src/frontends/android to your app as a module, and use the project in your app.

Go to File -> New -> Import Module.

Select the android folder from the strongswan project directory.

It will give an error that the app module already exists, so change the module name from “app” to “strongswan” (or whatever you want) and click Finish.

Right-click on app and click Open Module Settings.

Select the Dependencies tab from the side menu, click “+” and select Module Dependency.

Select strongswan and click OK.

Now you can see that the strongswan module has been added.

Conclusion:

The basic purpose of this blog is to summarize the strongSwan (open-source IPsec-based VPN solution) configuration and integration into an Android project to build a VPN app using the IKEv2 protocol.


Page speed is a measurement of how fast the content on a web page loads: the speed at which pages are downloaded and displayed in the user’s web browser. Page speed can be described as either “page load time” (the time it takes to fully display the content on a specific page) or “time to first byte” (how long it takes for the browser to receive the first byte of information from the web server).

Page speed depends on the following factors and fixes:

  1. Server Response Time (TTFB)
  2. Poor Web Hosting
  3. Use a CDN (Content distribution network)
  4. JavaScript and CSS resources
  5. Asynchronous loading of files
  6. Enable compression
  7. Use Optimized Images
  8. Media Files
  9. Remove unnecessary Plug-in and Script
  10. Leverage browser caching
  11. Reduce JavaScript execution time

1. Server Response Time (TTFB)

Server response time is the time that passes between a client requesting a page in a browser and the server responding to that request. The optimal server response time is under 200 ms.
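To get a rough, first-hand feel for TTFB, you can time how long the first byte of a response takes to arrive. A minimal JVM sketch (plain HttpURLConnection; real measurements should average several requests, and here the demo probes a tiny local server only so it runs anywhere — point it at your own site’s URL instead):

```java
import com.sun.net.httpserver.HttpServer;
import java.net.HttpURLConnection;
import java.net.InetSocketAddress;
import java.net.URL;

public class TtfbProbe {
    // Time from issuing the request until the first byte of the body arrives.
    static long measureTtfbMillis(String url) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        long start = System.nanoTime();
        conn.getInputStream().read(); // blocks until the first byte is available
        long elapsed = (System.nanoTime() - start) / 1_000_000;
        conn.disconnect();
        return elapsed;
    }

    public static void main(String[] args) throws Exception {
        // Tiny local server so the demo is self-contained and offline.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/", exchange -> {
            byte[] body = "ok".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        });
        server.start();
        long ttfb = measureTtfbMillis("http://127.0.0.1:" + server.getAddress().getPort() + "/");
        System.out.println("TTFB: " + ttfb + " ms");
        server.stop(0);
    }
}
```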

A lack of caching also increases server response time, because the browser fetches files from the server every time rather than from the cache.

To improve your server response time, look for performance bottlenecks like slow database queries, slow routing, or a lack of adequate memory and fix them.

I suggest not spending too much time optimizing server response time, since it is a server-side concern that is not directly in our control. As you fix the other issues, server response time also drops dramatically.

2. Poor Web Hosting

Poor web hosting is a factor that increases server response time. Most new site owners choose the cheapest possible option for hosting. While this is often enough in the beginning, you’ll likely need to upgrade once you start getting more traffic. The main options for hosting are:

  • Shared hosting
  • VPS hosting
  • Dedicated hosting

Shared hosting is the cheapest option; you can often get it for about five dollars per month. With shared hosting, you share resources like CPU, disk space, and RAM with other sites hosted on the same server, which is not good for page speed.

With VPS (Virtual Private Server) hosting, you still share a server with other sites, but you have your own dedicated portion of the server’s resources. This is a good in-between option: it isolates your site from everyone else on the server without the cost of dedicated hosting, which helps page speed because other sites’ traffic doesn’t compete with yours.

That is why I suggest using VPS hosting instead of a shared server or poor web hosting: a shared server hosts multiple websites, whose combined traffic is a main cause of increased server response time.

3. Use a CDN (Content Distribution Network)

CDN stands for Content distribution network. It is also known as content delivery network.

A content delivery network (CDN) is a system of distributed servers (network) that deliver pages and other web content to a user, based on the geographic locations of the user, the origin of the webpage and the content delivery server.

Use a CDN instead of self-hosting downloaded JS or CSS files: because CDN edge servers are placed close to the user, latency is reduced when the distance your content needs to travel is shorter. A CDN can make your website load much faster.

4. JavaScript And CSS Resources

Remove all unnecessary files, whether CSS, JS, or anything else (fonts), and make sure their removal doesn’t affect the page’s design or functionality.

  • Remove all unnecessary comments and code that the page doesn’t use.
  • Minify all CSS and JS files. Minification reduces file size, so files consume less space and load faster; it can dramatically increase your page speed.
  • Remove comments, formatting, and unused code from files.
  • If the page uses more than one JS file, merge them into one. It will reduce the number of JS requests.
  • Avoid libraries that bring their own CSS and scripts when roughly 80% of that content is unnecessary for your work; such libraries take too much time to load.
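To see why minification shrinks payloads, here is a deliberately naive Java sketch that strips comments and collapses whitespace (it would mangle “//” inside string literals or URLs; real minifiers such as Terser parse the source properly):

```java
// Naive "minifier": strips /* */ and // comments and collapses whitespace.
public class NaiveMinify {
    static String naiveMinify(String src) {
        return src.replaceAll("(?s)/\\*.*?\\*/", "")   // block comments
                  .replaceAll("(?m)//.*$", "")         // line comments
                  .replaceAll("\\s+", " ")             // collapse whitespace
                  .trim();
    }

    public static void main(String[] args) {
        String js = "// comment\nvar x = 1;  /* block */\nvar y = 2;";
        System.out.println(naiveMinify(js)); // var x = 1; var y = 2;
    }
}
```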

5. Asynchronous Loading Of Files

A browser can load files synchronously or asynchronously. When files load synchronously, the browser loads one file at a time.

We often use more than one file for styling and scripts (library scripts), and in many cases these files are bulkier than other page elements, so browsers typically take longer to load them.

Load JavaScript files asynchronously, making sure this doesn’t affect other functionality on the page (animations or anything else). Below, script1.js loads synchronously while script2.js and script3.js load asynchronously. Asynchronous loading is done simply with the async attribute:

<script src="../script1.js"></script>
<script src="../script2.js" async></script>
<script src="../script3.js" async></script>

Don’t load interdependent files asynchronously; sometimes a smaller file depends on a larger one and throws an error because the other file hasn’t finished loading. Below, script.js contains jQuery code; since script.js is smaller than jquery.min.js, it finishes loading while jQuery is still loading, which causes an error:

<script src="../jquery.min.js" async></script>
<script src="../script.js" async></script>

We can use rel="preload" to load CSS files asynchronously. Note that some browsers don’t support preload.

<link rel="stylesheet" href="../style1.css">
<link rel="preload" href="../style2.css" as="style" onload="this.rel='stylesheet'">
<link rel="preload" href="../style3.css" as="style" onload="this.rel='stylesheet'">

Here style1.css is loaded normally, whereas style2.css and style3.css are preloaded.

6. Enable Compression (GZIP)

When a browser visits a web server, it checks whether the server has compression enabled and requests the web page. If compression is enabled, the browser receives the compressed file, which is significantly smaller; if it isn’t, it still receives the page, only the uncompressed version, which is much larger. Here is how to enable compression on common web servers.

  • On NGINX web servers
# Load gzip preferences
gzip on;
gzip_proxied any;
gzip_types application/javascript application/rss+xml application/vnd.ms-fontobject application/x-font application/x-font-opentype application/x-font-otf application/x-font-truetype application/x-font-ttf application/x-javascript application/xhtml+xml application/xml font/opentype font/otf font/ttf image/svg+xml image/x-icon text/css text/javascript text/plain text/xml;
location ~* \.(css|webp|js|ttf|otf|svg)$ {
    expires 365d;
}
  • On Apache web servers
AddOutputFilterByType DEFLATE text/plain
AddOutputFilterByType DEFLATE text/html
AddOutputFilterByType DEFLATE text/xml
AddOutputFilterByType DEFLATE text/css
AddOutputFilterByType DEFLATE application/xml
AddOutputFilterByType DEFLATE application/xhtml+xml
AddOutputFilterByType DEFLATE application/rss+xml
AddOutputFilterByType DEFLATE application/javascript
AddOutputFilterByType DEFLATE application/x-javascript
  • Via .htaccess
<ifModule mod_gzip.c>
mod_gzip_on Yes
mod_gzip_dechunk Yes
mod_gzip_item_include file .(html?|txt|css|js|php|pl)$
mod_gzip_item_include handler ^cgi-script$
mod_gzip_item_include mime ^text/.*
mod_gzip_item_include mime ^application/x-javascript.*
mod_gzip_item_exclude mime ^image/.*
mod_gzip_item_exclude rspheader ^Content-Encoding:.*gzip.*
</ifModule>
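To get an intuition for how much gzip saves on text assets, here is a small JVM sketch (repetitive text like HTML, CSS, and JS compresses very well; already-compressed media does not):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

public class GzipDemo {
    // Gzip a byte array in memory and return the compressed bytes.
    static byte[] gzip(byte[] data) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        try (GZIPOutputStream out = new GZIPOutputStream(buffer)) {
            out.write(data);
        }
        return buffer.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // Repetitive markup, a stand-in for a typical HTML page.
        byte[] html = "<div class=\"item\">hello</div>\n".repeat(500).getBytes();
        byte[] compressed = gzip(html);
        System.out.println("original=" + html.length + " bytes, gzipped=" + compressed.length + " bytes");
    }
}
```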

7. Use Optimized Images

Use properly sized images; this is a simple way to reduce loading time. Sometimes our images are bigger than needed, which takes too much time. The best way is to resize the images yourself rather than scaling them with CSS. An alternative is to use the “srcset” and “sizes” attributes of the <img> tag, i.e.:

<img srcset="template-880w.jpg 880w, template-480w.jpg 480w, template-320w.jpg 320w" sizes="(max-width:320px) 280px, (max-width:480px) 440px, 800px" src="template-880w.jpg" >

8. Media Files

Media files, especially images, also play an important role in page speed and can be a real drag on your site’s performance. Many ecommerce sites use large images, which become a main cause of slow pages.

  • Don’t GZIP images; they aren’t compressed the same way as text files.
  • Use smaller images, because larger images take more time to load.
  • Convert images to the “.webp” format and use it in the img srcset, because .webp loads faster than other image formats. One drawback of .webp is that Safari does not support it, so use it like this:
<picture>
	<source srcset="../insta.webp" type="image/webp" data-aos="fade-up">
	<img src="../insta.png"/>
</picture>
<script>
/*	check webp support	*/
function supportsWebp()
{
    if (!self.createImageBitmap) return Promise.resolve(false);
    const webpData = 'data:image/webp;base64,UklGRh4AAABXRUJQVlA4TBEAAAAvAAAAAAfQ//73v/+BiOh/AAA=';
    return fetch(webpData)
        .then(r => r.blob())
        .then(blob => createImageBitmap(blob))
        .then(() => true, () => false);
}

// createImageBitmap is asynchronous, so the result arrives in a promise
supportsWebp().then(function (hasWebp)
{
    var root = document.getElementsByTagName('html')[0];
    root.className += hasWebp ? ' webp' : ' no-webp';
});
</script>
  • Preload critical images so that other content cannot delay them.
<link rel="preload" href="../image.jpg" as="image" >
<link rel="preload" href="../style.css" as="style" >
<link rel="preload" href="../script.js" as="script">
<link rel="preload" href="../font.woff" as="font"  >
  • With this technique, files are preloaded early and are already available when the content that uses them is rendered.
  • For icons, use custom icons (your own images) rather than a third-party library such as Font Awesome; they take less time to load than third-party libraries.

9. Remove Unnecessary Plug-in And Script

Remove unnecessary (unused) scripts and avoid third-party scripts where possible. Such plug-ins and scripts negatively affect website speed; unnecessary libraries bring their own scripts and styling, which take extra time to load. To reduce the impact of third-party code, you should:

  • Defer the loading of JavaScript
  • Use link tags with preconnect attributes
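As a quick sketch of both techniques together (the CDN URL is a placeholder, not from the original post):

```html
<!-- Open the connection to a third-party origin early -->
<link rel="preconnect" href="https://cdn.example.com" crossorigin>

<!-- Defer the script so it doesn't block page parsing -->
<script defer src="https://cdn.example.com/widget.js"></script>
```

The preconnect hint performs the DNS lookup and TLS handshake ahead of time, and the deferred script only executes after the document has been parsed.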

10. Leverage Browser Caching

Caching allows your web server to deliver a page much faster to a browser that has already received it once. Leveraging browser caching means specifying how long web browsers should keep files stored locally. That way the user’s browser downloads less data while navigating through your pages, which improves your website’s loading speed. To enable it, add these lines to your .htaccess file:

## EXPIRES CACHING ##
<IfModule mod_expires.c>
ExpiresActive On
ExpiresByType image/jpg "access 1 year"
ExpiresByType image/jpeg "access 1 year"
ExpiresByType image/gif "access 1 year"
ExpiresByType image/png "access 1 year"
ExpiresByType text/css "access 1 month"
ExpiresByType application/pdf "access 1 month"
ExpiresByType application/javascript "access 1 month"
ExpiresByType application/x-javascript "access 1 month"
ExpiresByType application/x-shockwave-flash "access 1 month"
ExpiresByType image/x-icon "access 1 year"
ExpiresDefault "access 2 days"
</IfModule>
## EXPIRES CACHING ##

11. Reduce JavaScript Execution Time

You won’t have any control over what external scripts do. Short of not including them, about the only thing you can do is defer their loading. This lets the page continue to load and execute while the script is loaded and executed later. This method doesn’t work with all scripts, but it works with most:

<script defer src="https://example.com/script.js"></script>


You might have noticed that in the app switcher, the screens of all running applications are displayed. These are actually screenshots of the applications, which the OS uses for animation purposes.

When and why are these screenshots taken?

When the home button on an iPhone or iPad is pressed, a screenshot of the current application is immediately taken. This is used to generate the animation of the application appearing to “shrink” into the screen. The image is also stored for use as a thumbnail for the running application.

Problems caused by these screenshots?

If sensitive information was displayed in your application at the time of the screenshot, serious security implications may arise. Personal information may unknowingly be leaked and used for unwanted purposes.

As a proof of concept, run different applications on your iOS device and press home to enter the app switcher. You will see the applications’ screens with their content clearly visible. Attached is a screenshot of my phone with different games and apps running.

Can we protect application from it?

The answer is YES. You can prevent the screenshot from capturing your sensitive data, and we will show you a simple way to do it.

Getting Started

To build a sample ProtectApp, follow the steps below.

  1. Xcode project creation
  2. Adding some content to the main screen
  3. Adding a protection view when the app switches to the background state
  4. Removing the protection view when the app switches back to the active state

1. Xcode Project Creation

Open up Xcode and create a new single-view project named “ProtectApp”. Use Storyboard to design the application UI and select “Swift” as the application language.

2. Adding some content to the main screen

Add some content to the view controller so that you can easily identify the application in switcher mode and distinguish the protection view from the main app.

3. Adding a protection view when the app switches to the background state

When the application is about to move from the active state to the background or inactive state, add a custom view as a full-screen overlay. Then, when the OS takes its screenshot, the app switcher will display your custom view instead of your content.

Let’s see how to do that.

Add the following methods to your AppDelegate class.

//to add protection to your app content
func protectAppContentFromScreenshot()
    {
        // fill screen with our own colour
        let securityView = UIView.init(frame: self.window!.frame)
        securityView.backgroundColor = UIColor.red
        securityView.tag = 22
        securityView.alpha = 0;
        self.window?.addSubview(securityView)
        self.window?.bringSubviewToFront(securityView)
        
        // fade in the view
        UIView.animate(withDuration: 0.2) {
            //
            securityView.alpha = 1
        }
    }

Now call protectAppContentFromScreenshot() from the applicationWillResignActive method of the AppDelegate class.

4. Removing the protection view when the app switches back to the active state

When the application switches back to the active state from the background or inactive state, you need to remove the protection view from the application.

For this, call the removeProtectionFromApp method from the applicationDidBecomeActive method of the AppDelegate class.

//remove protection from your app content
func removeProtectionFromApp()
    {
        let securityView = self.window?.viewWithTag(22)
        UIView.animate(withDuration: 0.5, animations: {
            //
            securityView?.alpha = 0
        }) { (completed) in
            //
            securityView?.removeFromSuperview()
        }
    }
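Wiring both methods into the app lifecycle can be sketched as follows (the lifecycle method names are standard UIApplicationDelegate callbacks; the helper methods are the ones defined above):

```swift
// In AppDelegate.swift

func applicationWillResignActive(_ application: UIApplication) {
    // Cover the UI before the OS takes its app-switcher screenshot
    protectAppContentFromScreenshot()
}

func applicationDidBecomeActive(_ application: UIApplication) {
    // Remove the overlay once the app is active again
    removeProtectionFromApp()
}
```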

Now, take a look at the image below. This is how it will look after implementing all of the above code.

For source code, you can visit this link: https://github.com/whizpool/ProtectedApp


After a long wait and much anticipation, iOS 13 at last supports native Dark Mode. Users can enable a system-wide dark appearance that is supported in all official apps. As we will see, Apple has also made it simple for developers to add dark mode support with minimum effort.

iOS 13 Dark Mode support changes:

  1. Status bar style: .default, .darkContent, .lightContent
  2. Activity indicator: .medium, .large; deprecated: .gray, .white, .whiteLarge
  3. UILabel, UITextField, UITextView: use semantic colors or custom colors for light and dark mode
  4. Attributed strings: require providing a foregroundColor
  5. Embedded web content: declare supported color schemes with color-scheme and use the prefers-color-scheme media query for custom colors and images
  6. Images: dark mode image variants
  7. Image tint color: dark mode tint color
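For item 5, the embedded web content side might look like this minimal CSS sketch (the colors are illustrative, not from the original post):

```css
/* Declare that the page supports both appearances */
:root { color-scheme: light dark; }

body { background: #ffffff; color: #111111; }

/* Custom colors when the system is in dark mode */
@media (prefers-color-scheme: dark) {
  body { background: #1c1c1e; color: #f2f2f7; }
}
```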

Let’s make a start!

If you have already done this, great; now we will discuss what more you can do to make the interface better. So let’s start with “How to implement Dark Mode”.

Step 1: Colors

In the end, our app is mostly about colors, and if we get the colors right we are almost ready to launch it in dark mode.

System Colors (Dynamic)

Before iOS 13, UIColor offered only a few simple colors like black, red, white and yellow. We should no longer rely on these in iOS 13, because they are static: they cannot adapt to appearance changes and always remain the same.

Other colors are dynamic (e.g. systemRed): they become lighter in dark mode and darker in light mode rather than remaining the same. On iOS 13+ it’s better to use the new system colors, which respect the user’s color scheme preference:

label.textColor = UIColor.label

Compatibility:

What if, instead of if #available, there was a way to abstract the color choice down one level, so we could do something like this?

label.textColor = ColorCompatibility.label

Once we cover those, we can use their red/green/blue/alpha components to create the implementation of Color Compatibility that we want:

enum ColorCompatibility
{
	static var label: UIColor
	{
		if #available(iOS 13, *)
		{
			return .label
		}
		return UIColor(red: 0.0, green: 0.0, blue: 0.0, alpha: 1.0)
	}

	static var secondaryLabel: UIColor 
	{
		if #available(iOS 13, *) 
		{
			return .secondaryLabel
		}
		return UIColor(red: 0.9215686274509803, green: 0.9215686274509803, blue: 0.9607843137254902, alpha: 0.6)
	}

	// ... 34 more definitions: full code in the link at the bottom
}

We can then use Color Compatibility to set any colors we need.

Custom Colors (Dynamic): The Assets Catalog

With custom colors there is more room for error for you and your design team; Apple has already done the work on dynamic system colors for our ease. In Xcode 11 we can also define appearance variants for a color set.

If we want to design our own custom color, for that, first we have to go into Assets Catalog and open the attribute inspector, and set its appearance from None to Any, Dark as shown in the below figure.

Programmatically:

In iOS 13, a new UIColor initializer was introduced:

init(dynamicProvider: @escaping (UITraitCollection) -> UIColor)

You can customize your own color based on the userInterfaceStyle property from UITraitCollection:

extension UIColor 
{
	static func myColorForDark() -> UIColor 
	{
		if #available(iOS 13, *)
		{
			return UIColor.init 
			{ 
				(trait) -> UIColor in
				return trait.userInterfaceStyle == .dark ? UIColor.darkGray : UIColor.orange
			}
		}
		else 
		{
			return UIColor.blue
		}
	}
}

Don’t forget to enable high contrast as well.

As you can see in the below picture, we have defined four different variants for one color. Again, I strongly suggest using System and Semantic Colors as much as possible:

Step 2: Images

SF Symbols:

Apple introduced SF Symbols at WWDC19. SF Symbols is a huge collection of glyphs (over 1500!) that are available for developers to use in their own apps.

Apple itself uses SF Symbols in each stock app like Reminders, News, Maps and others.
You can fetch any of them by using the new API UIImage(systemName:)

_ = UIImage(systemName: "star.fill")

Like SF Symbols, template images are monochrome images that are defined in our Xcode assets catalog by selecting “render as” Template Image in the Attribute Inspector. By using them, you get several advantages. To name one, you gain dark mode for free.

let myGlyphImage = UIImage(named: "myGlyph")
let myGlyphImageView = UIImageView(image: myGlyphImage)
myGlyphImageView.tintColor = .systemBlue

Other Images:

For all other kinds of images that are not template images or symbols, such as photos, we can follow the same steps as for custom colors: set their appearance to Any, Dark in the asset catalog and drop in a new variant for each appearance.

The image below shows how to set an image for dark and light mode, and also how to set the simulator to dark mode:

The picture above shows how the images look in dark mode and in light mode.

Dynamic Images are automatically resolved by UIImageView but if we need to resolve our UIImage independently we can do so by accessing the imageAsset property on our UIImage.

let myDarkImage = UIImage(named: "SunAndMoon")
let asset = myDarkImage?.imageAsset
let resolvedImage = asset?.image(with: traitCollection)

Detecting Dark Mode:

There could be cases in which you want to detect appearance changes programmatically and update your user interface accordingly:

func viewChanges()
{
	if traitCollection.userInterfaceStyle == .dark
	{
		MoodShift.text = "Night Mode"
	}
	else
	{
		MoodShift.text = "Light Mode"
	}
}

override func traitCollectionDidChange(_ previousTraitCollection: UITraitCollection?)
{
	super.traitCollectionDidChange(previousTraitCollection)

	let userInterfaceStyle = traitCollection.userInterfaceStyle // either .unspecified, .light, or .dark
	// Update your user interface based on the appearance
	print(userInterfaceStyle)
	viewChanges()
}

Specific Screens:

To override the user interface style, just override this variable in the top view or view controller and it will propagate down to subviews:

import UIKit
class CheckViewController: UIViewController 
{
	override func viewDidLoad() 
	{
		super.viewDidLoad()

		// Always adopt a Light interface style.
		overrideUserInterfaceStyle = .light
		// Do any additional setup after loading the view.
	}
}

Step 3: Drawing Attributed Text

If we are drawing attributed text, we must set the .foregroundColor attribute; when it is not specified, it defaults to black. Set it to a proper color instead (e.g. UIColor.label) for correct results. The pictures below show what happens if we don’t set .foregroundColor.

let textDraw = "This text is an attributed string."
let attributes: [NSAttributedString.Key: AnyObject] = [ .font: UIFont.preferredFont(forTextStyle: .title3), .foregroundColor: UIColor.label]
(textDraw as NSString).draw(at: CGPoint.zero, withAttributes: attributes)

A Deeper Look:

If your app completely relies on storyboards for the UI, then congratulations!
You’re now set to fully support Dark Mode.
Not all of us are this lucky; if you’re not among these people, read on.

Behind The Scenes: Draw Time

iOS picks the right tint/image for our dynamic colors/images at draw time: but when exactly is “draw time”?

As you know, our views can become invalid at some point in their lifetime:

  • Maybe the user has rotated the screen.
  • Maybe a UIView needs to add a new element in the interface, etc.

You’re always guaranteed to have iOS pick the right tint/material/image when you’re inside any of the following methods:

  • UIView
    • draw()
    • layoutSubviews()
    • traitCollectionDidChange()
    • tintColorDidChange()
  • UIViewController
    • viewWillLayoutSubviews()
    • viewDidLayoutSubviews()
    • traitCollectionDidChange()
  • UIPresentationController
    • containerViewWillLayoutSubviews()
    • containerViewDidLayoutSubviews()
    • traitCollectionDidChange()

Dark Mode In CALayers:

To use dynamic colors outside of these methods, you may need to manage the UITraitCollection yourself. This is needed when working with lower-level classes such as CALayer and CGColor.

let layer = CALayer()
// get the current traitCollection used for our view
let traitCollection = view.traitCollection
traitCollection.performAsCurrent
{
	// dynamic colors resolve against the current trait collection here
	layer.borderColor = UIColor.label.cgColor
}

Roadmap to start implementing Dark Mode:

  1. Download and install Xcode 11 beta
  2. Build and Run your app with dark mode enabled
  3. Fix the obvious “mistakes” spotted
  4. Add dark variants to all your assets
  5. Make sure to set the foreground key when drawing attributed text
  6. Move all your appearance logic in the “Draw time” functions
  7. Adapt Dark Mode one screen at a time:
    • Start from the .xibs files
    • Move to storyboards
    • Move to code
    • Repeat for all screens


GIFs are gaining popularity because, like memes, they’re useful for communicating jokes, emotions, and ideas. These are animated images but aren’t really videos. GIF files can hold multiple pictures at once, and people realized that these pictures could load sequentially. The format supports up to 8 bits per pixel for each image, allowing a single image to reference its own palette of up to 256 different colors chosen from the 24-bit RGB color space. It also supports animations and allows a separate palette of up to 256 colors for each frame. These palette limitations make GIF less suitable for reproducing color photographs and other images with color gradients, but it is well-suited for simpler images such as graphics or logos with solid areas of color. Unlike video, the GIF file format does not support audio.
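The palette arithmetic behind these limits can be checked directly (a quick illustration, not from the original post):

```python
# Each GIF pixel is an index of up to 8 bits into a per-frame palette.
bits_per_pixel = 8
palette_size = 2 ** bits_per_pixel   # entries available per frame

# Each palette entry is chosen from 24-bit RGB.
rgb_space = 2 ** 24                  # total colors to choose from

print(palette_size)   # 256
print(rgb_space)      # 16777216
```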

Core Image is a powerful framework that lets you easily apply filters to images. It can use either the CPU or the GPU to process image data quickly. You typically use CIImage objects in conjunction with other Core Image classes such as CIFilter, CIContext, CIColor, and CIVector. Although a CIImage object has image data associated with it, it is not itself an image. Let’s dig into how we can open, access, and even create a GIF using Core Image.

Getting Started

Before starting, we expect that you are familiar with Objective-C, which is used as the primary language here. Anyone with enough knowledge of Swift can easily convert this code to Swift.

The GIF filtering process involves the following steps, which we will go through one by one:

  1. Xcode Project Creation
  2. Render or Play GIF image
  3. Extract GIF images
  4. Apply effects on extracted images
  5. Merge to create final GIF

1) Xcode Project Creation

Open up Xcode and create a new single-view project named “GifHandler”. Use Storyboard to design the application UI and select Objective-C as the language.

2) Render or Play GIF Animation

Proceeding further, we need to select a GIF image and play its animation. It can come from any source, such as the photo library. Add a UILabel and a UIButton to the “Select GIF” scene. The UIButton will trigger an action that opens the device photo library using the SDK framework. In ViewController.m add the IBAction

- (IBAction) showImagePickerForLibrary:(id)sender

and connect it with UIButton just created.

We will present a UIImagePickerController to get a GIF image from the library. Once the photo picker is launched, select the desired GIF image and continue. Implement the photo picker delegate method imagePickerController:didFinishPickingMediaWithInfo:

[imageAsset requestContentEditingInputWithOptions:options completionHandler:^(PHContentEditingInput *contentEditingInput, NSDictionary *info) {

    BOOL bFileExist = false;
    NSURL *sourceFileURL = contentEditingInput.fullSizeImageURL;
    if (sourceFileURL)
        bFileExist = [[NSFileManager defaultManager] fileExistsAtPath:sourceFileURL.path];

    if (bFileExist)
    {
        [self proceedToSaveAndDisplayGif:sourceFileURL];
    }
    else
    {
        // Proceed to get it from request image data
        [self performSelectorInBackground:@selector(getCanvaseImageFromPHAssetIfURLNotFound:) withObject:imageAsset];
    }
}];

The photo picker provides a PHAsset object from which we need to get the image data. Either we get the physical source URL using the requestContentEditingInputWithOptions method of PHAsset, or we get the data from PHImageManager’s requestImageDataForAsset method. We then save the GIF image to our processing directory.

Now, render the saved GIF image from our processing directory. Since a GIF is a sequence of images, from iOS 13.0 the SDK provides the CGAnimateImageAtURLWithBlock function to animate GIF images. The callback keeps assigning the sequence of images to pDisplayImageView until it is stopped.

- (void) displayGifImageAtPath:(NSString*)sourceFilePath
{
    self->sourceFilePath = sourceFilePath;
    NSLog(@"SourcePath: %@", self->sourceFilePath);
    if (@available(iOS 13.0, *)) {

        newImageAnimator = [[ImageAnimator alloc] init];
        [newImageAnimator animateImageAtURL:[NSURL fileURLWithPath:sourceFilePath] onAnimate:^(UIImage * _Nullable img, NSError * _Nullable error) {

            if (!self->newImageAnimator.stopPlayback)
                self->pDisplayImageView.image = img;
        }];
    }
    else
    {
        pDisplayImageView.image = [UIImage animatedImageWithAnimatedGIFURL:[NSURL fileURLWithPath:sourceFilePath]];
    }
}

Create a new Objective-C class “ImageAnimator” subclassing NSObject and implement the CGAnimateImageAtURLWithBlock call in it. Now create its object in the displayGifImageAtPath method and start animating the GIF. The animation can be stopped by setting the block’s boolean stop parameter to true, via:

[newImageAnimator setStopPlayback:YES];


- (void)animateImageAtURL:(NSURL *)url onAnimate:(onAnimate)animationBlock
{
//    __weak typeof(self) weakSelf = self;
    NSDictionary *options = [self animationOptionsDictionary];

    if (@available(iOS 13.0, *)) {
        CGAnimateImageAtURLWithBlock((CFURLRef)url, (CFDictionaryRef)options, ^(size_t index, CGImageRef  _Nonnull image, bool * _Nonnull stop) {
            *stop = self.stopPlayback;
            animationBlock([UIImage imageWithCGImage:image], nil /* report any relevant OSStatus if needed */);
        });
    } else {
        // Fallback on earlier versions
    }
}

3) Extract GIF Images

As a GIF is a sequence of images, we will now extract all images from it. Add a new view controller file named ApplyEffectsViewController to the project. We will do the GIF image extraction and apply effects in ApplyEffectsViewController.

Add a UIButton titled “Next” to the Select GIF scene and attach an action to it that moves to ApplyEffectsViewController, passing the GIF’s source path along.

ApplyEffectsViewController *UIVC = [storyboard instantiateViewControllerWithIdentifier:@"ApplyEffectsViewController"];
UIVC.sourceFilePath = self->sourceFilePath;
[self.navigationController pushViewController:UIVC animated:true];

In viewDidLoad of ApplyEffectsViewController.m, initialize the array gifMergingPathsBeforeMerge.

gifMergingPathsBeforeMerge = [NSMutableArray new];
gifMergingPathsBeforeMerge = [self animatedSequenceOfImagesOfGIFImageURL:gifURL];

This array will hold the paths of the GIF’s extracted images in our local directory. The iOS SDK provides the method CGImageSourceCreateImageAtIndex, available in the Image I/O framework, which creates a CGImage object for the image data at the specified index in an image source.

NSData *imageData = [NSData dataWithContentsOfURL:gifURL];
CGImageSourceRef source = CGImageSourceCreateWithData((CFDataRef)imageData, nil);
size_t imagesCount = CGImageSourceGetCount(source);

Get the number of images in the GIF, then loop over all of them.

for (size_t i = 0; i < imagesCount; i++) {
    CGImageRef cgImage = CGImageSourceCreateImageAtIndex(source, i, nil);
    NSString *writePath = [NSString stringWithFormat:@"%@-%lu.jpg", pathDirectory, i+1];
    BOOL isWritten = [self CGImageWriteToFile:cgImage andPathString:writePath];
    int delay = delayCentisecondsForGifImageAtIndex(source, i);
    if (isWritten)
    {
        [images addObject:@{@"path":writePath, @"duration":[NSNumber numberWithInt:delay]}];
    }
}

For each CGImageRef obtained from CGImageSourceCreateImageAtIndex, we write the image to the local directory. If you are confident that memory isn’t a problem, you can skip this step.

CGImageDestinationRef destination = CGImageDestinationCreateWithURL(url, kUTTypeJPEG, 1, NULL);
CGImageDestinationAddImage(destination, image, nil);
CGImageDestinationFinalize(destination);
CFRelease(destination);

The GIF animation delay time is one of the important values to save: it is the amount of time, in seconds, to wait before displaying the next image in the animated sequence. From the image source ref, extract its properties, get the kCGImagePropertyGIFDictionary dictionary, and extract the delay time value from it. We need this value for each indexed image.

CFDictionaryRef const gifProperties = CFDictionaryGetValue(properties, kCGImagePropertyGIFDictionary);
NSNumber *number = (__bridge NSNumber *)CFDictionaryGetValue(gifProperties, kCGImagePropertyGIFUnclampedDelayTime);
delayCentiseconds = (int)lrint([number doubleValue] * 100);

Now, after saving the image and getting the delay value, we append both as a dictionary to the array, which is then assigned to the gifMergingPathsBeforeMerge array.

[images addObject:@{@"path":writePath, @"duration":[NSNumber numberWithInt:delay]}];

Once the array is filled with the extracted-image dictionaries, we are ready to use it to apply effects.

4) Apply effects on extracted images

Before applying any effect/filter, we need to add a UIImageView to the scene controller to see how the applied effect looks.

Open the storyboard and add a UIImageView to the Apply Effects scene. Instead of displaying all the GIF images, we show the first of the extracted images.

There are many Core Image filters that can be applied, but in this blog we will apply only a few. Add UIButtons to the scene controller and attach them to their respective actions. We will use the sepia filter to explain things.

Applying an effect/filter to a GIF means applying it to the whole sequence of images. So we loop over all extracted images, apply the filter to each individual image, and save the result to another local directory to avoid overwriting the source images. gifMergingPathsAfterMerge will hold the saved image paths.

gifMergingPathsAfterMerge = [NSMutableArray new];
NSDictionary *tDict = gifMergingPathsBeforeMerge[gifMergingCurrentIndex];
NSURL *writtenPath = [self mergeEffectsInSourceImageFromURL:[NSURL fileURLWithPath:tDict[@"path"]]];

Filters are applied to a CIImage, so first we need to create one from the path.

Directly loading a CIImage from a URL may produce an invalid orientation; that’s why we create a UIImage first and then a CIImage from it, correcting the image orientation if required.

UIImage *imageObjectBeforeMerge = [[UIImage alloc] initWithContentsOfFile:fromURL.path];
CIImage *ciImageToMerge = [[CIImage alloc] initWithImage:imageObjectBeforeMerge];
ciImageToMerge = [self sepiaFilterImage:ciImageToMerge withIntensity:0.9];

Once the CIImage object is available, apply the sepia filter to it.

CIFilter* sepiaFilter = [CIFilter filterWithName:@"CISepiaTone"];
[sepiaFilter setValue:inputImage forKey:kCIInputImageKey];
[sepiaFilter setValue:@(intensity) forKey:kCIInputIntensityKey];
return sepiaFilter.outputImage;

The output image of the CIFilter is a CIImage, so we assign it to our CIImage object “ciImageToMerge”. Now let’s save it: we create a CIContext and use its JPEG representation writer to save the image.

BOOL bRes = [context writeJPEGRepresentationOfImage:ciImageToMerge toURL:urlToWrite colorSpace:CGColorSpaceCreateDeviceRGB() options:compressOptions error:&jpgError];

Once the image is saved, we update our local array with the saved path.

NSString *duration = tDict[@"duration"] ? tDict[@"duration"] : @"0";
[gifMergingPathsAfterMerge addObject:@{@"path":writtenPath.path, @"delay":duration}];

When the filter has been applied to all extracted images, we proceed to create the final merged GIF.

5) Merge to create final GIF

Up to this point we have applied the sepia filter to every frame of the source GIF and populated the gifMergingPathsAfterMerge array with the saved paths. Now we proceed to create one complete GIF from these paths.

First we need to create the destination image ref:

CGImageDestinationRef destination = CGImageDestinationCreateWithURL((__bridge CFURLRef)fileURL, kUTTypeGIF, kFrameCount, NULL);

And set its properties

CGImageDestinationSetProperties(destination, (__bridge CFDictionaryRef)fileProperties);

Loop over all gifMergingPathsAfterMerge objects, extract the path and delay values, and add each frame to the destination image:

for (NSUInteger i = 0; i < kFrameCount; i++) {
    NSLog(@"%@", imgArray[i]);
    NSDictionary *tempDict = imgArray[i];
    int delay = [tempDict[@"delay"] intValue];
    NSString *path = tempDict[@"path"];
    float delayInCenti = (float)delay / 100.0;

    NSDictionary *frameProperties = @{
                                      (__bridge id)kCGImagePropertyGIFDictionary: @{
                                              (__bridge id)kCGImagePropertyGIFDelayTime: [NSNumber numberWithFloat:delayInCenti] // a float (not double!) in seconds, rounded to centiseconds in the GIF data
                                              }
                                      };

    UIImage *image = [UIImage imageWithContentsOfFile:path];
    CGImageDestinationAddImage(destination, image.CGImage, (__bridge CFDictionaryRef)frameProperties);
}
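After the loop, the destination must be finalized; mirroring the JPEG case shown earlier, the call might look like this (a sketch; `destination` is the GIF image destination created above):

```objectivec
// Write all added frames and properties, producing the GIF at fileURL
if (!CGImageDestinationFinalize(destination)) {
    NSLog(@"Failed to finalize the image destination");
}
CFRelease(destination);
```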

The finalize method creates a single GIF image at the destination path; it writes the image data and properties to the data, URL, or data consumer associated with the image destination.
For the user to preview, show the first image.

NSDictionary *tDict = imgArray[0];
NSString *sourceImagePath = tDict[@"path"];
self->pImageView.image = [UIImage imageWithContentsOfFile:sourceImagePath];

The final GIF with the applied filter is now ready at the destination path. We can also save it to the photo gallery. Add an “Export to Library” UIButton to the scene and, in its action, use the photo library’s performChanges method:

[[PHPhotoLibrary sharedPhotoLibrary] performChanges:^{
    [PHAssetChangeRequest creationRequestForAssetFromImageAtFileURL:self->pExportImageURL];
} completionHandler:^(BOOL success, NSError *error) {
}];

Using this process we can apply as many filters as we like. We hope you now have a better understanding of how a filter can be applied to a GIF image. For the complete source code, visit this link:

https://github.com/whizpool/gif-handler.git


Communication is integral to human life, and effective meetings are equally integral to professional life. Running effective meetings is critical for every organization. In software companies especially, it is important to keep everyone, whether the client or the solution architects, developers, QA engineers, UI/UX designers, team leads and project managers, up to date on every stage of a project so that projects are delivered productively.

Following are the most powerful tips to run effective meetings that will definitely set you up for success.

Always set the agenda:

Always set the agenda and email it to attendees at least one day before the meeting; this gives them sufficient time to prepare. If the time constraint is tight, the agenda must at least be clearly defined when the meeting is set up.

Start and end on time:

The starting and ending times of a meeting should always be decided in advance, and it is better to mention the expected time leeway in the email. Once a meeting time is set, the meeting should start on time and its timeline should be respected by all means. If there is a need to prolong the meeting, first get the approval of all members before extending it, because someone may have other significant tasks to perform right after the meeting.

Take notes for yourself:

Never forget to bring a writing pad to a meeting. Taking notes on your computer might not be a good idea, as it can give the impression that you are busy on your computer catching up on emails or messages. The key purpose of taking notes for yourself is to record any queries or tasks that have been directed to you.

Follow up on the meeting:

Determination is a key influence skill: if you want something to really happen, you must always follow up. To ensure productivity doesn’t slow down after you leave the meeting room, immediately circulate the meeting notes and follow up on the commitments made, or you will end up without clarity about what was agreed upon.

Conclude with an actual Action Plan:

Always save the last few minutes of your meetings to discuss the next steps. Before walking out of the meeting room, there should be a clear discussion of what everyone agreed upon and who is responsible for what; otherwise, all the time you spent on the meeting will go in vain.

Some other vital points to consider are:

  • No use of cellphones during meetings.
  • Always come prepared.
  • Always come on time; better yet, be in the meeting room 5 minutes before the scheduled start.
  • Always stay focused on the topic and be clear and concise. Attendees should avoid side conversations and pay attention.
  • If you don’t agree with someone, disagree, but do so without being disrespectful to others.
  • Never forget the Q&A session.

If you uphold all of these habits, you will find that meetings are a most effective tool for getting work done.
