Craig Wilson

Principal UX Architect and Strategist

I create UX Strategies and Visions, Policies, Governance, and Lean UI / UX Processes used in the development of amazing user-centric software products. I have over 25 years of experience in IxD / UI / UX / CX and Human Factors Engineering. I have designed, developed, and released numerous high-traffic websites and software applications, some of which serve over 1,000,000 distinct users each day.

I am very proficient at identifying business, market, and user trends. I design with these objectives in mind, then validate my designs with various methods of Usability Testing. I use optimized Lean UX / Agile processes with my UI / UX development teams to create amazing products.

I have overseen teams across four continents. My teams have been, and are, located in Buenos Aires (Argentina), Kuala Lumpur (Malaysia), Delhi (India), Dublin (Ireland), Guangzhou (China), Canada, and across the United States.

Understanding Human Computer Interaction

What Is Human Factors Engineering?

Great question! I have been asked this question by Recruiters, Clients, Interns and C-Level Execs over the years so I'm going to give you a quick run-down of what Human Factors Engineering really is...

Ok, well maybe not quick... Human Factors Engineering is the discipline of optimizing human performance in the workplace by combining a wealth of knowledge, primarily from the disciplines of psychology, heuristic analysis, and ergonomics. It considers the working environment from a human-centered viewpoint, looking at an entire system and its influence on the way people behave and interact with it. The "User Centered Design" methodology was born from the evolution of these concepts.

Download this awesome pdf. It has great, detailed information on the UCD methodology. It was written by Pascal Raabe of the UK: User Centered Design by Pascal Raabe

The User Centered Design Process

User Centered Design Process

Typical UCD Method - Steps and Work Flow

Analysis Phase

  • Meet with key stakeholders to set vision
  • Include usability tasks in the project plan
  • Assemble a multidisciplinary team to ensure complete expertise
  • Develop usability goals and objectives
  • Conduct field studies
  • Look at competitive products
  • Create user profiles / personas
  • Develop a task analysis
  • Document user scenarios (case studies)
  • Document user performance requirements

Design Phase

  • Begin to brainstorm design concepts and metaphors
  • Develop screen flow and navigation model
  • Do walkthroughs of design concepts
  • Begin design with paper and pencil
  • Create low-fidelity prototypes
  • Conduct usability testing on low-fidelity prototypes
  • Create high-fidelity detailed design
  • Do usability testing again
  • Document standards and guidelines
  • Create a design specification

Implementation Phase

  • Do ongoing heuristic evaluations
  • Work closely with delivery team as design is implemented
  • Conduct usability testing as soon as possible

Deployment Phase

  • Use surveys to get user feedback
  • Conduct field studies to get info about actual use
  • Create an "Affinity" board to organize data from field studies
    • Download this pdf I compiled "Affinity Diagramming" if you'd like to understand and use the Affinity process during your UCD / Usability Evaluations
  • Check objectives using usability testing

Remember, you are Designing the Experience not just the Product...

Designing the Experience vs Designing the Product

UX 'vs' UI

Visualizing the differences between UX and UI

What is UX? Isn’t that just UI? It is common for people, even inside tech circles, to use the acronyms UX and UI as synonyms, but they are quite different. UX stands for “user experience” while UI stands for “user interface.”

User experience (UX) is the way a person feels about using a product, system or service. User experience highlights the experiential, affective, meaningful and valuable aspects of human-computer interaction and product ownership, but it also includes a person's perceptions of the practical aspects such as utility, ease of use and efficiency of the system. User experience is subjective in nature, because it is about an individual's feelings and thoughts about the system. User experience is dynamic, because it changes over time as the circumstances change.

A user interface (UI) is the system by which people (users) interact with a machine. The user interface includes hardware (physical) and software (logical) components. User interfaces exist for various systems, and provide a means of: (a) input, allowing the users to manipulate a system, and (b) output, allowing the system to indicate the effects of the users' manipulation.

Need a great visual...

Still too broadly defined for you? Need something more concrete? Designer Ed Lea created a series of photographs that form a very simple visualization of the differences between UX and UI, revealing that they are not, in fact, synonyms, and demonstrating how each relates to a product.

UI vs UX

See? That isn’t too complicated. Now, when you go to that tech happy hour and someone says that they do “UX and UI,” you know what they are talking about and can ask relevant questions (and in some cases, you’ll spot the acronyms being used interchangeably and you will know the difference).

Download this great info pdf called "UX is not UI". It explains the differences very well: UX is not UI

This infographic below gives a pretty basic explanation of the emphasis UX Designers and UI Designers place on their respective disciplines.

UI vs UX


Before Contextual Inquiry (Field Interviews) have a Brainstorming Session

Brainstorming is a technique used by businesses and individuals alike to generate new and creative ideas. Brainstorming is supposed to banish traditional or safe ways of thinking in favor of more spontaneous and often more insightful ones. The technique is based on the premise that too much analysis can stifle unique ideas, and it relies on fast responses to avoid getting bogged down in too much thought.


The Six Sides Brainstorming Technique

This is my favorite technique for a rapid "Brainstorming" session. The goal is to keep track of and to organize the ideas presented and discussed in a logically written format.

Six-sided brainstorming takes the approach that each member of the team is given the topic, and each has to apply six methods to it. Each member describes the topic, analyzes it, compares it, associates it with other things, applies it practically, and argues either for or against it. This gives everyone in the collaboration the chance to apply their own thoughts and opinions to the topic at hand.

How It Works...

Six sides brainstorming involves asking six questions of your topic. The questions you will ask will help you and your team to further define the topic. This is especially helpful if you are trying to determine a project's scope or how you will collaborate on a project together. Here are the questions:

  • How will you describe the topic?
  • Can you compare the topic with another topic?
  • What can the topic be associated with?
  • What happens when you analyze the topic for the various dimensions?
  • What can the topic be applied to?
  • Can you argue for or against the topic?

By answering these questions, you can help to open up avenues that you may not have previously thought of when it comes to the topic at hand.
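If you want to capture a session's output in a structured way, the six questions above map naturally onto a simple worksheet per participant. Here is a minimal sketch in Python; the topic and example ideas are hypothetical, and the "side" names are just shorthand labels for the six questions:

```python
# The six sides, as shorthand labels for the six questions above.
SIX_SIDES = (
    "Describe",   # How will you describe the topic?
    "Compare",    # Can you compare the topic with another topic?
    "Associate",  # What can the topic be associated with?
    "Analyze",    # What happens when you analyze its dimensions?
    "Apply",      # What can the topic be applied to?
    "Argue",      # Can you argue for or against the topic?
)

def new_session(topic):
    """Create an empty six-sides worksheet for one participant."""
    return {"topic": topic, "sides": {side: [] for side in SIX_SIDES}}

def add_idea(session, side, idea):
    """File an idea under one of the six sides."""
    if side not in session["sides"]:
        raise KeyError(f"Unknown side: {side}")
    session["sides"][side].append(idea)

# Hypothetical usage:
session = new_session("in-store self-checkout kiosk")
add_idea(session, "Describe", "A touchscreen station for scanning and paying")
add_idea(session, "Argue", "For: shorter queues. Against: theft/shrinkage risk")
```

Because every idea lands under one of the six sides, merging the worksheets from all participants at the end of the session is just a matter of concatenating lists per side.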

I have provided a pdf document here that will help you follow the "Six Sides Brainstorming" technique: The Six Sides Brainstorming Technique

Contextual Inquiry

We Are The User's Advocate

Contextual Inquiry is a method where users are observed in real life situations to discover how they work and what they need. We've all seen the example here:

Contextual Inquiry

Successful contextual inquiry (field interviews) eliminates a great deal of wasted time and development cost. If requirements gathering is done thoroughly during a contextual interview, we'll understand the user's vision before we start down the design and development path.

Conducting Contextual Inquiry

Contextual Inquiry can be challenging to conduct so here are 3 key areas that can often go wrong if not considered properly in advance.

The Interview

  • Time: Plan in as much time as possible; it will take longer than you expect.
  • The time you decide to spend in the users' context, be it their place of work or leisure, should be carefully decided. While you may be tempted to be there with the users for a short, specific amount of time, it's a good idea to spend as much time as possible. Why? Well, let's say you've decided to spend 2 hours with customer care executives and your task is to study how users interact with the portal. From those 2 hours, deduct the hello time, the time spent explaining why you are there, the coffee time and the settling-down time. Also deduct the time you would spend getting to grips with what is happening around you. Then deduct the time needed for the participant to get used to someone being around and inquiring about their day-to-day tasks. How much are you left with? Even though Contextual Inquiry may seem lengthy and time-consuming, it is a very insightful method and actually quite an enjoyable experience.

  • Participants: Make sure you really know who you need to research and cover the variables.
  • As with all user research, the participants should be representative of the deemed user groups. To do this properly, make sure it is discussed in depth using the affinity maps as further guidance. You may think you know which department needs to be researched to improve customer satisfaction, or what profile of customer you need to research, but a well worked-through affinity map could highlight a few other user or customer groups that hadn't been considered. Recruitment needs to be specific and well documented to ensure results can be analyzed properly according to the profiles of the participants.

  • Your Script or Research Plan: Should only be semi structured as you never know what you might find.
  • A fully structured plan could be restrictive and concentrate solely on getting answers to the questions written down. The important thing is to go with the user's flow, stay focused on the tasks the user is doing, and discuss any queries as you go along (without interrupting natural behavior). Another problem area when conducting Contextual Inquiry is trying to interpret as you listen. Finish the entire conversation, and then spend time analyzing and coming up with implications for design afterwards. This can be difficult, as it is normal human behavior to process and analyze whilst taking notes. A good tactic is to separate facts from any assumptions or interpretations in your notes by using a different colored pen or highlighter, a different kind of bullet point, or simply quotation marks to indicate the direct words used by your participant. When you get back to your notes, you know exactly what the participant said and you can interpret and analyze it accurately.

Heuristic Evaluation

What is a Heuristic Evaluation?

Another Great question! Here's a brief answer...

Heuristic Evaluation is a usability inspection method in which evaluators check an interface against a checklist of recognized good practices. One comprehensive example is the 247-item checklist taken from the international standard ISO 9241-151, "Ergonomics of human-system interaction, Part 151: Guidance on World Wide Web user interfaces", which represents important areas of good practice for any website.

User Centered Design

Heuristic Evaluation Principles

The Ten Basics

  • Visibility of System Status: The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.
  • Match between System and the Real World: The system should speak the users' language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
  • User Control and Freedom: Users often choose system functions by mistake and will need a clearly marked "emergency exit" to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.
  • Consistency and Standards: Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.
  • Error Prevention: Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.
  • Recognition rather than Recall: Minimize the user's memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.
  • Flexibility and Efficiency of Use: Accelerators, unseen by the novice user, may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
  • Aesthetic and Minimalist Design: Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.
  • Help users Recognize, Diagnose, and Recover from Errors: Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.
  • Help and Documentation: Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user's task, list concrete steps to be carried out, and not be too large.
    • This pdf file lists the 10 Heuristic Evaluation basics and gives you a detailed checklist you can follow as an Evaluator to make sure your evaluation is thorough: Heuristic Evaluation Checklist.
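In practice, an evaluator logs each issue under the heuristic it violates along with a severity rating; Nielsen's commonly used 0-4 severity scale works well for this. A minimal sketch in Python; the heuristic names come from the list above, while the example findings are hypothetical:

```python
from collections import defaultdict

# Nielsen's 0-4 severity scale: 0 = not a problem, 1 = cosmetic,
# 2 = minor, 3 = major, 4 = usability catastrophe.
SEVERITY_LABELS = {0: "Not a problem", 1: "Cosmetic", 2: "Minor",
                   3: "Major", 4: "Catastrophe"}

def log_finding(findings, heuristic, description, severity):
    """Record one issue under the heuristic it violates."""
    if severity not in SEVERITY_LABELS:
        raise ValueError("Severity must be 0-4")
    findings[heuristic].append((severity, description))

def triage(findings, min_severity=3):
    """Return the high-severity issues, worst first, for the fix-first list."""
    issues = [(sev, heuristic, desc)
              for heuristic, items in findings.items()
              for sev, desc in items if sev >= min_severity]
    return sorted(issues, reverse=True)

# Hypothetical findings from one evaluation pass:
findings = defaultdict(list)
log_finding(findings, "Visibility of System Status",
            "No progress indicator during file upload", 3)
log_finding(findings, "Error Prevention",
            "Delete button has no confirmation step", 4)
log_finding(findings, "Aesthetic and Minimalist Design",
            "Sidebar repeats the footer links", 1)

for sev, heuristic, desc in triage(findings):
    print(f"[{SEVERITY_LABELS[sev]}] {heuristic}: {desc}")
```

Sorting by severity before reporting keeps the conversation with the delivery team focused on the catastrophes and majors rather than the cosmetics.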

Cognitive Walkthrough

Making Heuristic Evaluation Count

Cognitive Walkthrough involves one evaluator or a group of evaluators inspecting a user interface by going through a set of tasks and evaluating its understandability and ease of learning. The user interface is often presented in the form of a paper mock-up or a working prototype, but it can also be a fully developed interface. The input to the walkthrough also includes the user profile, especially the users' knowledge of the task domain and of the interface, and the task cases. The evaluators may include human factors engineers, software developers, or people from marketing, documentation, etc. This technique is best used in the design stage of development, but it can also be applied during the code, test, and deployment stages.

Cognitive Walkthrough

Cognitive Walkthrough Principles

Defining the Input to the Walkthrough

The Procedure

  • Who will be the users of the system? This should include specific background experience or technical knowledge that could influence users as they attempt to deal with a new interface. The users' knowledge of the task and of the interface should both be considered. An example user description is "Macintosh users who have worked with MacPaint".
  • What task(s) will be analyzed? In general, the analysis should be limited to a reasonable but representative collection of benchmark tasks. Task selection should be based on the results of marketing studies, needs analysis, concept testing, and requirements analyses.
  • What is the correct action sequence for each task? For each task, there must be a description of how the user is expected to view the task before learning the interface. There must also be a description of the sequence of actions that should accomplish the task with the current definition of the interface. Example actions are: "press the RETURN key", "move cursor to 'File' menu". It can also be a sequence of several simple actions that a typical user could execute as a block, such as, "Select 'Save' from 'File' menu".
  • How is the interface defined? The definition of the interface must describe the prompts preceding every action required to accomplish the tasks being analyzed, as well as the reaction of the interface to each of these actions. If the interface has been implemented, all information is available from the implementation. Earlier in the development process, the evaluation can be performed with a paper description of the interface. For a paper description, the level of detail in defining the interface will depend on the expertise that the anticipated users have with existing systems.

Walking Through the Actions

The analysis phase consists of examining each action in the solution path and attempting to tell a credible story as to why the expected users would choose that action. Credible stories are based on assumptions about the user's background knowledge and goals, and on an understanding of the problem-solving process that enables a user to guess the correct action.

As the walkthrough proceeds, the evaluators ask the following four questions:

  • Will the users try to achieve the right effect? For example, their task is to print a document, but the first thing they have to do is select a printer. Will they know that they should select a printer?
  • Will the user notice that the correct action is available? This relates to the visibility and understandability of actions in the interface.
  • Will the user associate the correct action with the effect to be achieved? Users often use the "label-following" strategy, which leads them to select an action if the label for that action matches the task description.
  • If the correct action is performed, will the user see that progress is being made toward solution of the task? This is to check the system feedback after the user executes the action.
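A convenient way to run the walkthrough is to record, for each action in the solution path, a yes/no answer to each of the four questions; the failure stories then fall out automatically. A minimal sketch; the task and action names are hypothetical:

```python
from dataclasses import dataclass

# The four walkthrough questions, asked of every action in order.
QUESTIONS = (
    "Will the user try to achieve the right effect?",
    "Will the user notice that the correct action is available?",
    "Will the user associate the correct action with the effect?",
    "Will the user see that progress is being made?",
)

@dataclass
class ActionStep:
    action: str
    answers: tuple  # one boolean per question, in QUESTIONS order

    def failure_stories(self):
        """Return the criteria for which a success story could not be told."""
        return [q for q, ok in zip(QUESTIONS, self.answers) if not ok]

# Hypothetical walkthrough of a two-step "print a document" task:
steps = [
    ActionStep("Select 'Print' from the 'File' menu", (True, True, True, True)),
    ActionStep("Choose a printer from the device list", (False, True, True, True)),
]

for step in steps:
    for criterion in step.failure_stories():
        print(f"FAIL at '{step.action}': {criterion}")
```

Any step that returns a non-empty failure list is a point where the evaluator must write a failure story explaining why the user may fail.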

The evaluator(s) will try to construct a success story for each step in the task case(s). General conditions under which a success story can be told are given in "Common Features of Success" below. When a success story cannot be told, construct a failure story, noting the criterion (one or more of the four questions above) and the reason why the user may fail.

Common Features of Success

Users may know "what effect to achieve":

  • Because it is part of their original task, or
  • Because they have experience using a system, or
  • Because the system tells them to do it.

Users may know "an action is available":

  • By experience, or
  • By seeing some device (like a button) or
  • By seeing a representation of an action (like a menu entry).

Users may know "an action is appropriate" for the effect they are trying to achieve:

  • By experience, or
  • Because the interface provides a prompt or label that connects the action to what they are trying to do, or
  • Because all other actions look wrong.

Users may know "things are going OK" after an action:

  • By experience, or
  • By recognizing a connection between a system response and what they were trying to do.

Reference: C. Wharton et al., "The cognitive walkthrough method: a practitioner's guide", in J. Nielsen & R. Mack (eds.), Usability Inspection Methods, pp. 105-140.

Heuristic Evaluation vs Cognitive Walkthrough

The Hybrid Approach

This pdf explains the difference between the two and offers a pretty nice HE / CW hybrid process. It is worth your time to explore:

Using Heuristics

Lean UX 'vs' Agile UX

Lean UX. Is it *really* about start-ups or something more profound?

This is a great infographic describing the differences between Lean UX and Agile UX. It takes a good look at the core issues facing these emerging disciplines and gets past some of the buzzwords to answer the question: is there a difference, or are they the same thing?

Lean UX and Agile UX

Lean and UX

Lean is focused on the elimination of waste in processes that don't contribute to the creation of value for the end customer. This may be in an actual product, or in the services that surround it pre- and post-sales. The types of waste that Lean seeks to remove are:

  • Transport - moving products that are not actually required to perform the processing.
  • Inventory - all components, work in process and finished product not being processed.
  • Motion - people or equipment moving or walking more than is required to perform the processing.
  • Waiting - waiting for the next production step.
  • Over-production - production ahead of demand.
  • Over-processing - extra activity resulting from poor tool or product design.
  • Defects - the effort involved in inspecting for and fixing defects.

Lean Start-ups and UX

Lean applied to start-ups, as first defined by Eric Ries, therefore seeks to launch businesses and products that rely on validated learning, scientific experimentation, and iterative product releases to shorten product development cycles, measure progress, and gain valuable customer feedback. In this way, companies, especially start-ups, can design their products or services to meet the demands of their customer base without requiring large amounts of initial funding or expensive product launches.

Agile and UX

Agile, on the other hand, is purely product-focused. It not only applies to software; agile methods like Scrum are also used in the creation of medical and financial products. Scrum employs collaboration to discern what is of value to a range of users, and encompasses Deming Cycles to continuously improve a team's capabilities, but it doesn't specifically seek to remove waste in the way that Lean defines it. Its process also doesn't necessarily seek out a great experience for its products, only that they meet, in Scrum's case, the Product Owner's acceptance criteria ("Definition of Done"). UX has a natural place in agile methods to help ensure that users' needs are taken into consideration in creating products. Products that have a great experience sell better; just look at iPhone and Samsung Galaxy S III sales! To extend this example, the intrinsic difference between Lean and Agile is that Agile creates the smartphone, while Lean would assess the pre- and post-sales processes to identify what parts of those processes don't add value to the end customer's experience of that smartphone.

Beyond Agile and Lean UX

While Lean and Agile have different focuses, they are complementary. Lean thinking can improve product delivery in Agile projects, just as Scrum can be applied to the building of a product to reduce waste and increase speed to market. UX has a place with each but, in my mind, is more powerful as the glue that binds these ways of working together, ensuring that the end-user's experience, wherever it occurs, is optimized both for their enjoyment and for business profit. The key to this marriage of processes lies in agile's collaboration, lean's focus on the ecosystem, and UX's reinforcement of human-centered design.

Lean, Agile and UX


Over a decade ago the community was grappling with defining what on earth "information architecture" actually was. Today, those waters continue to be muddied as buzzwords like agile and lean enter our vocabulary from different disciplines and domains. The truth is that as the reach of UX expands, it becomes obvious that understanding customers and users, and including that knowledge within service or product design, only serves to increase the value of those products and services. This is the human-centered design focus that the minds of Steve Jobs and others like him brought to the IT industry. Where Lean helps us focus on entire ecosystems of processes, agile helps us build those products and improve speed to market. Both, though, need this human-centered approach to be successful in our 21st-century world.

A / B Testing

What is A/B Testing?

A/B testing (sometimes called split testing) is comparing two versions of a web page to see which one performs better. You compare two web pages by showing the two variants (let's call them A and B) to similar visitors at the same time. The one that gives a better conversion rate wins!
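"Performs better" should be checked statistically rather than by eyeballing raw counts, since small differences can be noise. Here is a sketch of a standard two-proportion z-test using only Python's standard library; the visitor and conversion counts are made up for illustration:

```python
from statistics import NormalDist

def two_proportion_z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Return (z, p_value) for the difference between two conversion rates."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled rate under the null hypothesis that A and B convert equally.
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = (p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
    return z, p_value

# Hypothetical results: A converts 100 of 2,000 visitors, B converts 135 of 2,000.
z, p = two_proportion_z_test(100, 2000, 135, 2000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a p-value below 0.05 suggests B's lift is real
```

Dedicated tools (Visual Website Optimizer, Google Analytics experiments, etc.) run equivalent math for you, but it helps to know what "statistically significant winner" actually means.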

A/B testing

Why Should You A/B Test?

A/B testing allows you to make more out of your existing traffic. While the cost of acquiring paid traffic can be huge, the cost of increasing your conversions is minimal. To compare, a Small Business plan of Visual Website Optimizer starts at $49, the cost of 5 to 10 Google AdWords clicks. The Return On Investment of A/B testing can be massive, as even small changes on a landing page or website can result in significant increases in leads generated, sales and revenue.

What Can You Test?

Almost anything on your website that affects visitor behavior can be A/B tested.

Some elements that you can easily test are:

A/B testing
  • Headlines
  • Sub headlines
  • Paragraph Text
  • Testimonials
  • Call to Action Text
  • Call to Action Button
  • Links
  • Images
  • Content near the fold
  • Social proof
  • Media mentions
  • Awards and badges

Advanced tests can include pricing structures, sales promotions, free trial lengths, navigation and UX experiences, free or paid delivery, and more.

The A/B Testing Process

The correct way to run an A/B testing experiment (or any other experiment, for that matter) is to follow the Scientific Method. The steps of the Scientific Method are:

  • Ask a question: "Why is the bounce rate of my website higher than industry standard?"
  • Do background research: Understand your visitors' behavior using Google Analytics and any other analytics tools running on your website.
  • Construct a hypothesis: "Adding more links in the footer will reduce the bounce rate".
  • Calculate the number of visitors/days you need to run the test for: Always calculate the number of visitors required for a test before starting the test.
  • Test your hypothesis: You create a site wide A/B test in which the variation (version B) has a footer with more links. You test it against the original and measure bounce rate.
  • Analyze data and draw conclusions: If the footer with more links reduces bounce rate, then you can conclude that increased number of links in the footer is one of the factors that reduces bounce. If there is no difference in bounce, then go back to step 3 and construct a new hypothesis.
  • Report results to all concerned: Let others in Marketing, IT and UI/UX know of the test results and insights generated.
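The fourth step above, calculating the number of visitors before starting the test, can be sketched with the standard two-proportion sample-size formula. The baseline rate, expected lift, significance level and power below are assumptions you supply, not fixed values:

```python
import math
from statistics import NormalDist

def sample_size_per_variant(p1, p2, alpha=0.05, power=0.80):
    """Visitors needed in EACH variant to detect a change in conversion
    rate from p1 to p2 at the given significance level and power."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = nd.inv_cdf(power)           # desired statistical power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# e.g. detecting a lift from a 10% to a 12% conversion rate:
n = sample_size_per_variant(0.10, 0.12)
print(f"Need about {n} visitors per variant")
```

Note how the required sample grows rapidly as the expected lift shrinks; this is why tests that chase tiny improvements on low-traffic pages rarely reach significance.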

Usability Testing

Usability Testing Methods

There was a time when usability testing meant expensive labs and one-way mirrors. Not anymore. There are three core ways of running usability tests.

Formal Usability Lab

The Core Tests

  • Lab-Based: This is the classic approach to usability testing: users physically come to a lab, often with a one-way mirror, and are observed by a team of researchers.
  • Remote Moderated: Users log into screen-sharing software like GoToMeeting and attempt tasks while a facilitator observes.
  • Remote Unmoderated: Software like UserZoom, Loop11 or Webnographer walks participants through tasks, and their click paths are recorded.

Advantages and Disadvantages

  • Geographic Diversity
    • Lab-Based: Poor. Limited to one (or a few) locations.
    • Remote Moderated: Good. Users from across the US and the globe can participate; time-zone differences are the main drawback for international studies.
    • Remote Unmoderated: Good. Users from across the US and the globe can participate at times that are convenient to them.
  • Recruiting
    • Lab-Based: More difficult, because the geographic pool is limited to the testing location.
    • Remote Moderated: Easier, because there is no geographic limitation, but sessions are still longer.
    • Remote Unmoderated: Easiest, because there is no geographic limitation and sessions are shorter.
  • Sample Quality
    • Lab-Based: Good-Excellent. Limited to people willing to take time out of their day, but with tight control over user activity.
    • Remote Moderated: Good-Excellent. Able to recruit specialized users at minor inconvenience, and most interactions can be viewed.
    • Remote Unmoderated: Fair-Good. Often attracts people who are in it for the honorarium or who try to game the system.
  • Qualitative Insights
    • Lab-Based: Excellent. Direct observation of both the interface and user reactions; the facilitator can easily probe issues.
    • Remote Moderated: Good. Direct observation of the interface and limited user reactions; the facilitator can ask follow-up questions and engage in a dialogue.
    • Remote Unmoderated: Fair-Good. If the session is recorded, there is direct observation of the interface; with no recording, insights are gleaned from answers to specific questions.
  • Sample Size
    • Lab-Based: More restricted, due to geographic limitation and time.
    • Remote Moderated: Less restricted; limited by the time to run studies, but with more flexible scheduling.
    • Remote Unmoderated: Least restricted; easy to run large sample sizes (100+).
  • Costs
    • Lab-Based: Most expensive. Higher compensation costs for users, plus facilitator time.
    • Remote Moderated: Less expensive. User compensation is lower, less facilitation time is needed, and there are no facility costs.
    • Remote Unmoderated: Least expensive. Compensation is lowest, and there are no facilitation or facility costs.
  • Metric Quality
    • Lab-Based: Excellent. You can collect almost any measure (including eye-tracking) and task time.
    • Remote Moderated: Good-Excellent. Some metrics are limited (eye-tracking), but task-time data can still be collected.
    • Remote Unmoderated: Good. Task times and click paths are captured, but you can't directly observe what users are doing.

*Data from the 2012 UPA Survey

No one method is always best. A combination of methods provides a more comprehensive picture of the user experience. For example, I often add a few lab-based or remote moderated participants when I conduct an unmoderated study. It provides the best of both worlds: rich interaction and discussion, plus larger sample sizes and a more diverse and representative group.

Combining is not always an option. In my experience the two biggest drivers of the method chosen are budget and sample size. If you want to test a lot of users (or test several user groups) but have a limited budget then remote unmoderated testing is usually the way to go. Conversely, for mobile testing, it's still largely a lab-based evaluation to capture swipes and screens.

To help guide what method to use consider what factors are most important in your research:

  • Could the product or website being tested see significant benefits by drawing responses from an international audience? (moderated remote)
  • Does the interface being tested require a more in-depth look at direct, in-person responses? (lab)
  • Is a single function being evaluated, where simple answers will satisfy simple questions? (unmoderated remote)
  • Are the tasks closed-ended and easy for participants to understand and attempt? (unmoderated remote)
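The screening questions above can be sketched as a simple triage function. The rule ordering here is my own simplification, not a formal framework; treat it as a starting point, not a decision oracle:

```python
def recommend_method(international_audience, need_in_person_depth,
                     simple_closed_tasks):
    """Map the screening questions above to a starting test method."""
    if need_in_person_depth:
        # In-depth, in-person responses point to the lab.
        return "lab-based"
    if simple_closed_tasks:
        # Simple, closed-ended tasks suit unmoderated remote tools.
        return "remote unmoderated"
    if international_audience:
        # A global audience favors moderated remote sessions.
        return "remote moderated"
    return "combine methods"

print(recommend_method(international_audience=True,
                       need_in_person_depth=False,
                       simple_closed_tasks=False))
```

In real projects, budget and sample size (the two biggest drivers mentioned above) would override any single answer here.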

A Simple Diagram of the Usability Testing Process

Diagram of the Usability Testing Process

I have provided a pdf document here of the top 24 tools and web sites that can help with all 3 Core Usability Tests:

What Makes Great Design?

Design Concepts

Good Design is "Obvious", Great Design is "Transparent"!

The answer is actually not as philosophical as one would like to make it out to be. Regardless of the chosen medium for final execution, designers agree that a great design must share three primary elements: Creativity, Functionality and Balance.

Creativity is "the use of imagination and original ideas in the production of an artistic work". Obviously, creativity is an essential element in great design. As beauty is truly in the eye of the beholder, we realize it is all about perception. Let's consider product packaging. When shopping for a particular product in the grocery store, there are a number of brand selections to consider. One's first impulse is to select the brand we most recognize. Why do we recognize it? What makes our choice stand out from among the rest? It more than likely has the same content values as the generic brand, yet it is more appealing. Ahh yes, design... One perceives that better quality (design) packaging means a superior quality product, and that's creative!

Functionality is "the quality of being suited to serve a purpose well". What is the functionality of design? A great design successfully articulates the intended marketing message and is functional in a variety of media. Factors to consider in the conceptual stage of design include who the message is intended for, the best method of reaching them, and the rate of delivery success. Advances in technology continue to unveil new marketing resources and options for reaching targets. From print collateral, advertising and promotion to e-marketing, interactive and social media tools, a designer's understanding of, and ability to decide on, the best-suited delivery results in the required functionality!

Balance is "a condition in which different elements are equal or in the correct proportions". How does balance affect design? Ponder a moment... Have you ever happened upon a design that was so jumbled and busy that you missed the message in all the confusion? Sure you have... And although it was intended to deliver a message, the translation is lost due to a lack of balance. Depending on the target and mission, some design concepts are more complex and require increased visual stimulation to more accurately convey the intended message, but this is the exception. The purpose of any marketing effort is to reach the target and encourage action. Rule of thumb: keeping it simple will support the goal of achieving balance in your designs!

To Summarize

The reason I have made this information available to Recruiters, Clients, Interns and C-Level Execs?...

Embrace The Inner Geek!

This is my passion, my career, and I love doing it every day. I build clean, logical, user-centric designs for web sites, mobile devices, web applications and software applications. I mentor my teams and teach them to do the same. As a huge value-add, I can provide the data to back up my designs so that your company's ROI (Return On Investment) can be projected before allocating development funds for any project. You'll know whether your projects will succeed early in the development process!