Research Archives | CodeGuru (https://www.codeguru.com/research/)

Considerations for Setting Up an Enterprise Architecture
https://www.codeguru.com/research/considerations-for-setting-up-an-enterprise-architecture/ (Mon, 20 Dec 2021)

Enterprise Architecture Repository

The Enterprise Architecture Repository (EAR) is a TOGAF specification term intended to provide a single place for the storage and retrieval of solution architecture artifacts. The EAR provides the capability to link architectural assets to components of the Detailed Design, Deployment, and Service Management Repositories. Artifacts belonging to this repository are created using tools; some are custom developed. An important part of the EA repository is the architecture landscape, which represents the assets in use or planned by the enterprise at particular points in time.

In this article, I would like to discuss some of the factors that an organization should consider when implementing an EA repository.

Implementing an EA Repository: The Organization’s Current State

Implementing an EA Repository focuses on improving enterprise architecture management and its processes, and the capability to effectively structure and document the current state is key to that implementation. Evaluating an organization's current state is required to understand the impact of changes on business operations. To reach the highest level of maturity, the organization should document the important current-state areas that are prone to frequent change.

Future State of the EA Repository

Developing the EA Repository's future state depends first on the organization's current state and the documentation created for it. A future-state document should describe the target at an appropriate level of detail and explain how the organization will achieve it. An Enterprise Architect should conduct a gap analysis between the current and future states and identify initiatives to close the gap.

Read: How TypeScript Will Reshape the Enterprise Developer

Why Set Up an Enterprise Repository?

To establish a sustainable architecture practice within an organization, you should set up an EA Repository. Doing so helps the organization build other capabilities as well, such as business process management. The EA Repository is a single location for all architectural artifacts and definitions within the organization. It promotes a collaborative, communication-rich environment and helps set standards throughout the organization. A properly set up EA Repository reduces the time needed to investigate the current state and to create the future state.

Organization Strategy and Vision

The Architecture Vision, in the context of the Enterprise Architecture Repository, is essentially the architect's "elevator pitch." The organization vision is the key opportunity to sell the benefits of an EA repository to the decision-makers of the enterprise.

The goal is to develop an Architecture Vision that includes the goals of the business, responds to the strategic drivers, conforms with the principles, and addresses the concerns and objectives of the stakeholders.

To start building the EA vision, TOGAF suggests that you have a Request for Architecture Work already established. In the vision stage, TOGAF also specifies that you should formulate the Business Strategy, Business Principles, Business Goals, and Business Drivers of your organization.

Read: Exploring the Microservices Architecture

What are the Challenges of Enterprise Architecture?

Enterprise Architecture helps identify challenges and opportunities and feeds revisions of the organization's strategic plan and vision. Sizing the EA repository is itself a challenge, and inadequate size estimation underlies most of the remaining ones.

The lack of a consistent definition and understanding of EA as a discipline adds to the challenges. Many organizations adopt EA to fix an ongoing organizational problem without giving it a long-term goal.

An Enterprise Architect requires broad knowledge across many areas: business domain knowledge, technology, project management experience, and organizational skills. There are many paths to maturing as an Enterprise Architect, and architects who matured along different paths may see very different challenges in the same organization.

Most often, an EA attempts to take ownership of a business process and ends up getting blamed.

Read more articles about software architecture and design.

What are the Benefits of an Enterprise Repository?

The lack of an EA repository can result in inconsistent diagrams, duplication of content and effort, and inaccuracy due to poor collaboration. An EA repository provides a shared environment in which architecture teams produce diagrams and documentation, and users benefit from shared best practices that promote consistency and communication. With an EA repository, architects can keep using their preferred tools, saving time and effort. Organizations that implement an EA repository using TOGAF gain designs and procurement specifications that greatly facilitate open systems implementation, allowing the benefits of open systems to accrue inside the organization.

Organization Maturity and Readiness

An organizational Capability Maturity Model addresses the problem of connecting EA assets by providing an effective, proven method for an organization to gradually gain control over and improve its IT-related development processes. Maturity models describe the practices an organization must perform to improve its EA processes, provide a periodic measure of improvement, and constitute an industry-standard, proven framework within which to manage the improvement effort.

As per TOGAF, the Business Transformation Readiness Assessment is a technique for assessing an organization's readiness to accept change, and for identifying and dealing with issues during the implementation of EA.

Conclusion

I hope this article is beneficial to you. For more information about the TOGAF Architecture Repository, read through its specifications.

Read: Transitioning to Microservices Architectures – Are You Ready?

Tips to Optimize Website Performance
https://www.codeguru.com/research/tips-to-optimize-website-performance/ (Mon, 24 May 2021)

Why must developers optimize website performance? Because first impressions are everything, and that’s especially true when it comes to a site. Not only does it have to look good, but it also has to perform well to keep your visitors from bouncing to a competitor. Here are some website performance optimization tips you can implement to make a solid first impression and keep customers coming back for more.

Have you ever visited a website looking to be informed, entertained, or to make a purchase, only to have it load slowly? If so, it probably caused frustration. And it may have even caused a bit of distrust as you wondered if your browsing experience was secure.

If the load speed was slow enough, you might have gone back to the drawing board and looked for an alternative source. And if you visited that alternative and liked what you saw, that original website probably lost a loyal visitor or customer for good.

What Is Website Performance Optimization?

When you’re a web developer or a website owner, you want to avoid the above scenario at all costs. You can do so via website performance optimization, which improves a website’s load speeds. The better the load speeds and the faster the browsing experience, the higher your visitor engagement and conversion rates.

To be a bit more specific, optimizing website performance has the end goal of reducing the time it takes to display a page fully, so that it's 100 percent functional. When optimizing, you want the total page load time to be as short as possible. This time refers to how long it takes for both the front end and the server side to load entirely and generate a page. As such, by optimizing your front-end and server-side components, you can improve your website's performance and provide a positive user experience. Use the following tips to achieve that goal.
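Before diving into the tips, it helps to be able to measure that number. The sketch below (TypeScript running in the browser, using the standard Navigation Timing API) is one rough way to log the load phases; it's an illustration, not part of any particular tool:

// A rough sketch: log the main phases of a page load.
window.addEventListener("load", () => {
   // loadEventEnd is only filled in after the load handler finishes, so defer the read.
   setTimeout(() => {
      const [nav] = performance.getEntriesByType("navigation") as PerformanceNavigationTiming[];
      if (!nav) { return; }
      console.log("Time to first byte (server side):", nav.responseStart, "ms");
      console.log("DOM content loaded (front end):", nav.domContentLoadedEventEnd, "ms");
      console.log("Total page load time:", nav.loadEventEnd, "ms");
   }, 0);
});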

Ways to Optimize Website Performance

Use Tools to Test Your Site

One of the easiest ways to optimize the performance of your website is to use tools. Doing so can help you discover your site’s load speeds. More importantly, these tools can help you pinpoint any possible bottlenecks that are causing the site to load slower than it should.

To make the most of your optimization efforts, use the following tools before and after optimizing so you can measure any improvements:

While there are many more website optimization tools on the market, those three should get you going in your quest for better performance.

Minimize HTTP Requests

If you want to avoid slow load times, limiting the number of HTTP requests is an excellent place to start. In doing so, you’ll reduce how much data the browser needs to fetch, which can improve load speeds.

Is your site loaded with third-party plugins and unnecessary redirects? And is it also inundated with CSS and JavaScript files? Limit all of the above, and you should see your website become a lot quicker.

Fix 404 Errors

When a page no longer exists or cannot be found, your visitor will see a 404 error message. Such a message is undesirable not only because the visitor cannot see what they're looking for, but also because it can bog down your server and keep it from performing other tasks that could improve your speeds. Get rid of any 404 errors, and you should get a nice boost in performance.

Be Minimalistic With Web Fonts

Can having a wide variety of fancy-looking fonts catch your visitors’ eyes when browsing your site? Of course, but at what cost?

The more web fonts you have, the higher your HTTP requests. As mentioned, you want to minimize HTTP requests, so your browser doesn’t need to fetch so much data. By taking the minimalistic approach with fonts, you can keep those HTTP requests under control, see faster load speeds, and have happier customers.

Use a Content Delivery Network (CDN)

A great way to decrease website latency and optimize performance is to use a Content Delivery Network or CDN. What is website latency? It’s how much time it takes for a request to go from the sender to the receiver, plus the time needed to process that request.

Since you may have visitors from all around the world, a CDN with global servers at its disposal can deliver content to them quickly, no matter where they are.

Optimize Images

Can big, beautiful images wow your visitors via eye-catching graphics? Absolutely, but if those images are oversized and not optimized, they will make your website as slow as molasses. Have a look at the images on your website. If they are too big and not correctly sized, compress them with a tool like TinyPNG.
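If you prefer to handle this at build time, an image library can do the resizing and recompression for you. The following Node/TypeScript sketch uses the sharp library purely as an illustration (the file names are hypothetical); TinyPNG's web tool or API achieves the same goal:

import sharp from "sharp";

// Resize an oversized source image and recompress it for the web.
async function optimizeHeroImage(): Promise<void> {
   await sharp("hero-original.png")
      .resize({ width: 1200 })   // cap the width to what the page actually displays
      .jpeg({ quality: 80 })     // recompress with a reasonable quality/size trade-off
      .toFile("public/hero.jpg");
}

optimizeHeroImage().catch(console.error);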

Minify to Remove Unnecessary Characters From Your Code

When you minify or compress your code to make it as small as possible, you can reduce its size by 10-95 percent. The more lightweight your code, the faster your site will run. As a bonus, minification can also help you enjoy a higher SEO score so that visitors can find your website in the first place.

If your code has deadweight that’s unnecessary for loading pages, such as new lines, comments, and white space, get rid of it.
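As a simple illustration of what minification removes (hand-minified here; in practice a bundler or minifier does this automatically as part of your build):

// Before minification: comments, whitespace, and descriptive names.
function calculateTotalPrice(unitPrice: number, quantity: number): number {
   // Apply a 10 percent bulk discount for orders of 100 items or more.
   const discount = quantity >= 100 ? 0.9 : 1.0;
   return unitPrice * quantity * discount;
}

// After minification: the same logic with every unnecessary character stripped out.
// function calculateTotalPrice(n,t){return n*t*(t>=100?.9:1)}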

Old Programming Languages
https://www.codeguru.com/research/old-programming-languages/ (Fri, 01 Feb 2019)

Introduction

I’ve been wanting to write this article for a very long time. Older languages are still popular and being used on a daily basis for applications big and small. I am a VB6 guy, always been, and probably always will be. I still use it on a regular basis.

I remember when .NET launched. I was so excited to get started, little did I know that Visual Basic.NET would work differently. Not one hundred percent different, but suddenly OOP was a priority and a necessity.

Don’t get me wrong, I knew OOP very well, especially with C++ and Visual C++ 6, but my true love was Visual Basic 6. In my experience, it was just quicker and simpler to do everything I needed to.

Today, I would like to talk about older programming languages in general and why people still use them.

Reason 1: Necessity

The cold, hard fact is that there are thousands of companies still using older technologies. Why? Monetary reasons, I suppose, or the infrastructure of the country or city they live in. This causes many programmers to work on old computers and use old programming languages.

Reason 2: Company Policies

No, this is not an article for Ripley’s Believe it or Not; this is a fact. Some companies are run by the older generation. Older people (I don’t mean to generalize or stereotype here) tend to stick with stuff that works. This includes older mobile technologies, older cars, and older PCs with older operating systems. This makes writing some programs quite difficult because you have to not only cater to newer technologies, but especially concentrate on older tech.

Reason 3: Age of the Programmer

Older programmers (for the most part) simply do not see the necessity in switching to newer and more modern languages. They know what they can do with their brains and their language of choice. Getting them to switch to newer programming languages might prove difficult.

Reason 4: Technology

Yes, I have touched on this earlier, but I am now talking about technology as a whole. If your country or company doesn’t have the proper infrastructure, you will battle to join the masses in adopting newer technologies and programming languages. Yes, you might be able to write a mobile application, or a Web site, but without the latest frameworks and languages, your Web sites, applications, and mobile apps will fall behind.

Reason 5: Personal Choice

Nobody is forcing you to adapt. It is still your choice, but the fact of the matter is that you have to adapt to change, or stay behind. Apologies if the previous sentence sounded a bit harsh. I was also a bit skeptical to change. As I have been saying: I have always been a Visual Basic guy, but I work as a C# developer and couldn’t be happier.

Reason 6: Stubbornness

The title says it all.

Reason 7: Functionality

This can turn into a debate quite rapidly, especially on programming forums. Everybody wants to prove why their language of choice (old or new) is [still] the best. Yes, I get it. With newer technology comes more power and versatility. With a newer version of each language comes more features. This is the nice thing about them.

You have to remember that the .NET Framework is basically twenty years old. In these past two decades, humanity has made giant strides forward. When mobile phones entered the market, they were bulky and couldn’t do much. Now you have smart phones, smart TVs, and automated cars. Artificial Intelligence has also taken huge strides forward.

Reason 8: Neglect

In this situation, programmers want to learn new languages, or make the switch, but due to their employers not enabling them to do that, they simply cannot. Yes, you may say that the programmer can always get another job at a different employer, but now the programmer has to learn the newest languages first, before he or she can enter the job market again.

Reason 9: Amount of Work

Some programmers feel they are in their comfort zones. They simply concentrate on developing Windows-based applications. They do not want to develop Web sites or mobile applications because that becomes more work, more stuff to learn and know, and ultimately more work to do.

Reason 10: Time

I cannot believe I have ten reasons! I initially just wanted to add five…

There never seems to be time to learn newer languages. When you have kids and a wife and dogs and a house to take care of, it can become difficult juggling studies with work at home and at your place of work. Whenever there is time, there is always something else to do: renovations, holidays, fixing cars, taking your dogs to the vet, or personal injuries.

Unfortunately, that is life. But, that is not an excuse. I worked long hours, then, I came home and studied and worked. Yes, I was single back then, but fast forward many years. I still do, although not as often. I still write exams, I still try to learn new things. It is a balance. Now you might say I do not spend enough time with my family. Well, I do, because I do all my studies and most of my work when they are asleep.

Conclusion

Old languages are still quite useful. The trick is to find the balance between the old and new. I hope this article has provided some insight into the life and times of older-generation programmers.

Introduction to Domain-driven Design
https://www.codeguru.com/research/introduction-to-domain-driven-design/ (Mon, 19 Feb 2018)

Introduction: What is Domain-driven Design?

Domain-driven design (DDD) focuses more on the business needs than on technology. It's all about understanding the customer's real business needs. Domain-driven design consists of a set of patterns for building enterprise applications from the domain model out. During software development, a DDD approach is used to solve complex implementation problems. DDD puts the emphasis squarely on the domain model; the main focus is creating a conceptual model that forms a common language for both users and programmers.

Advantages of Domain-driven Design

By following DDD, your code will be flexible, easier to understand, and quicker to change and extend. The following are the advantages of using Domain-driven Design:

  • DDD provides the principles and patterns to solve difficult problems of software applications as well as business problems. These patterns have been successfully used to solve complex problems.
  • DDD is already proven; it has a history of success with complex project implementations. It aligns very well with the experience of developers and successful software applications already built.
  • It helps us write clear and testable code that represents the domain.
  • DDD helps us better understand client requirements. By following Domain-driven Design, the output code is generally much closer to the customer's vision and perspective of problem solving. The resultant code is also easier to write and read; it is better organized, should have fewer issues, and should be easier to test.

Concepts and Elements of Domain-driven Design

Context

Bounded Contexts are portions of the solution, and the Bounded Context is a central pattern in Domain-driven Design. It is the strategic design section dealing with large models and teams. Several DDD patterns explore alternative relationships between contexts.

Context Map

A Context Map is the global view of the application. Each Bounded Context fits within the Context Map, which shows how the contexts should communicate with each other and how data should be shared. A Context Map is the integration of all the domain models in the system. Each model might have been developed independently of the others; over time, proper integration is needed to make the system work end to end.

Domain Service

Domain Services contain operations, actions, or business processes and provide the functionality that the domain needs. They deal with all domain-related manipulation.

Application Services

Application Services are the services used by the outside world, and they may expose representations of data. An example of an application service is a database CRUD operation.

Infrastructure Services

An Infrastructure Service is a service that communicates directly with an external resource, for example the file system, the registry, an SMTP server, or a database.

Model

The Model usually represents an aspect of reality or something of interest. It’s an abstraction that describes the selected aspects of a domain. It’s often used to solve problems that are related to that particular domain.

The Model is also a simplification of the bigger picture and important aspects of the solution are concentrated on it. This means your Model should be focused knowledge around a specific problem that is simplified and structured to provide a solution.

Domain Expert

A Domain Expert is a person who is an authority in a particular area and an owner of a topic. In DDD, we build around the concepts of the domain, and domain experts advise on that domain. A Domain Expert is able to explain and clarify the requirements.

Entities

An Entity is an object that can be identified uniquely by its identifier. An Entity can be identified either by its ID or by a combination of some of its attributes. An Entity is defined by its identity.

Value Objects

A Value Object is an object that contains attributes but has no conceptual identity. It is a descriptor or property which is important in the domain you are modeling.
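A minimal TypeScript sketch of the distinction, using hypothetical Customer and Money types: the Entity is compared by its identifier, the Value Object by its attributes.

// Entity: two customers with the same name are still different if their ids differ.
class Customer {
   constructor(public readonly id: string, public name: string) {}
   equals(other: Customer): boolean {
      return this.id === other.id;
   }
}

// Value Object: two Money values are interchangeable when their attributes match.
class Money {
   constructor(public readonly amount: number, public readonly currency: string) {}
   equals(other: Money): boolean {
      return this.amount === other.amount && this.currency === other.currency;
   }
}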

Ubiquitous Language

Ubiquitous Language is the practice for building up a communication language between developers and users. It helps developers and the business share a common language platform that both parties understand to mean the same things. Ubiquitous Language should evolve as the team’s understanding of the domain grows.

Repository

A Repository mediates between the domain and the data mapping layer, using a collection-like interface for accessing domain objects. It is like a façade over your data store that pretends to be a collection of your domain objects. A Repository provides a centralized façade regardless of whether the data is stored in a database, in XML, or behind SOAP or REST services.
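A sketch of that collection-like interface in TypeScript, with hypothetical Order types; the implementation behind it could store data in a database, in XML, or behind a remote service.

type OrderId = string;
interface Order { id: OrderId; customerId: string; }

// The domain works against a collection-like interface; persistence details stay hidden behind it.
interface OrderRepository {
   findById(id: OrderId): Promise<Order | null>;
   findByCustomer(customerId: string): Promise<Order[]>;
   add(order: Order): Promise<void>;
   remove(order: Order): Promise<void>;
}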

Persistence Ignorance

The principle of Persistence Ignorance (PI) is a property of your domain model or business model. The model is persistence ignorant because it retrieves instances of the entities it contains through abstractions.

Aggregates

An Aggregate is a collection of objects bound together by a root entity; it tames complexity by treating the connected objects as a single unit. It's a cluster of associated objects that we treat as one unit.
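A small TypeScript sketch, assuming a hypothetical Order aggregate: outside code reaches the line items only through the root, which enforces the invariants for the whole cluster.

class OrderLine {
   constructor(public readonly productId: string, public readonly quantity: number) {}
}

class Order {
   private lines: OrderLine[] = [];   // only reachable through the root
   constructor(public readonly id: string) {}

   addLine(productId: string, quantity: number): void {
      // The root enforces the invariant for the whole cluster of objects.
      if (quantity <= 0) {
         throw new Error("Quantity must be positive");
      }
      this.lines.push(new OrderLine(productId, quantity));
   }

   get totalItems(): number {
      return this.lines.reduce((sum, line) => sum + line.quantity, 0);
   }
}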

Conclusion

I hope that, after reading this article, you have grasped the basic concepts and terminology of Domain-driven Design. DDD revolves around the concepts of object-oriented design, and developing applications with DDD is a big challenge. Happy reading!

Using Command Query Responsibility Segregation (CQRS)
https://www.codeguru.com/research/using-command-query-responsibility-segregation-cqrs/ (Wed, 13 Dec 2017)

Introduction

Before we dive in, let's investigate classic architecture. It has four layers, and each layer has its own responsibility: the domain handles the invariants and business rules of the project, and the service layer is responsible for basic validation and acts as the communication point to the domain layer. When your system grows, you need to make changes to adjust. Often, multiple users want to browse or modify the same set of data. In such an environment, the data returned to a user may not be the same as the data in the database; it may already have been modified, and the user ends up performing actions on old data.

The reason for this kind of inconsistency is having a single database and caching on top of it to improve performance; in other words, stale data. But changes of this kind are not effective enough, because the core is still the same: it is not suitable for your newly grown application, and it is not scalable enough.

I prefer the Domain-Driven Design approach for projects that have grown beyond what classical architecture can handle. But even with the Domain-Driven Design approach, in the architecture of some bounded contexts there are still bottlenecks that reduce your read/write throughput. For example, the model is located at the heart of the application, and it's not unusual to have complicated aggregates and sophisticated business rules.

If you try to query this kind of model, you may end up adding code purely for querying purposes, index searching, and other reporting concerns, because reporting typically sees the most use. This can compromise your domain model, because you need some strategy to facilitate read operations, such as merging aggregates and writing read code into a repository that is not suited to reading, which brings too much overhead. You can avoid these issues by applying CQRS architecture and freeing the model from any presentational requirements.

What Is CQRS?

Asking a question should not change the answer.

-Bertrand Meyer

CQRS originated with Bertrand Meyer's CQS. The difference is that, in CQRS, you have separate objects for commands and for queries. CQRS is fundamentally about denormalization.

CQRS stands for Command Query Responsibility Segregation. It was introduced by Greg Young. It postulates that every method should either be a Command that performs an action or a Query that returns data. A command cannot return data and a query cannot change the data. Each model can be optimized for its specific context, and it also retains its conceptual integrity.
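As a minimal illustration (the names below are hypothetical and not tied to any framework), the two sides become separate objects: the command handler changes state and returns nothing, while the query handler returns data and changes nothing.

// Write side: a command expresses an intent and returns no data.
interface RenameProductCommand { productId: string; newName: string; }

class RenameProductHandler {
   handle(command: RenameProductCommand): void {
      // ...load the product, apply the change, persist it; nothing is returned.
   }
}

// Read side: a query returns a DTO shaped for the screen and changes nothing.
interface ProductSummary { id: string; name: string; }

class ProductSummaryQuery {
   execute(productId: string): ProductSummary {
      // ...read from a store optimized for reads (a view, a cache, a denormalized table).
      return { id: productId, name: "" };
   }
}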

In CQRS, we have a command/handler pattern for commands (the writing side). The user sends a command, and the corresponding handler is responsible for coordinating state changes. What about the necessity of the command/handler pattern? The command pattern lets your project be more scalable (I will describe this further on) and prevents an anemic domain model. Most importantly, it makes sure the invariants of our business rules stay consistent: if we defined the invariants of our domain model outside it, we would have to define them separately each time, with no guarantee of consistency.

There are three kinds of CQRS: standard, event sourcing, and eventual consistency. I will describe each of them further on. You can have one database and two separate models; it's a good idea, and the result is immediate and synchronous. However, for other types of applications, it might be desirable to use two different data stores.

Query

You may ask why we shouldn’t use the same pattern for reading and writing to deal with code reuse purposes. Udi Dahan wrote an article about the fallacy of reuse. He believes:

Reuse may make sense in the most tightly coupled pieces of code you have, but not very much anywhere else.

On the query side, there are just DTO objects: no domain model, repository, or other common Domain-Driven Design implementations. Feel free to generate views from your model directly. The query side is responsible for retrieving data that is purified and suitable for reporting purposes. It has no need to go through the domain model and its bottlenecks; we can de-normalize data and optimise it into the shape the consumer wants. We can have a separate database for each side and synchronize them.

For implementing the query side, you may go directly to the database and do your querying there. That works, especially when performance and efficiency are a big deal. In this case, I recommend a light SQL library instead of an ORM; ORMs are anti-patterns and are not good for reading purposes. Someone may ask: without an ORM, isn't there a query duplication challenge? Yes, one of the complaints about direct queries concerns DRY; you may have to implement some domain model behavior (calculations and so on) again. Duplication of domain logic is not good and may cause the system to fail.

The other option is using the current domain objects. This option is good for quick reporting: you extend reporting objects from existing domain objects. Performance decreases significantly, but the speed of development increases. At first sight, a simple mapping between domain objects and reporting objects looks like exactly what you want. Unfortunately, the benefits come at a cost: this leads to big performance trouble, and the developer has to deal with the problem of impedance mismatch.

Database tables and class objects are orthogonal. In the database, there is no way to define invariants and business rules. You may consider using if statements or other conditional logic in queries, but they are far from their object equivalents and make your objects anemic. Also, the relational model does not support any sort of polymorphism or IS-A relationship, so developers eventually find themselves adopting one of three possible options to map inheritance into the relational world.

Command

A command is a business use case. You should write the command in the language of the business use case of the system, not in the ubiquitous language; the command is not the same as a concept in the domain model. In fact, the command is business-use-case driven. In general, a command is a DTO with simple validation. Now, you may ask how we can distinguish business use case language from the ubiquitous language. That issue is solved by the application service layer.

Let’s deep dive in and investigate. The application service layer (as you might already read my previous articles about Domain-driven design) sits above the domain layer and is responsible for interpreting business use case language to domain language and coordinate all actions. In my experiences in enterprise application development, almost all junior developers confuse domain logic (aka business logic, business rules, and domain knowledge) and application logic. This confusion will compromise your system. Let’s find out a way to make sure your choice is correct.

One of the big signs of application service layer logic is its responsibility for handling infrastructural concerns. Also, as I mentioned earlier, it's responsible for coordinating business use cases with the domain logic of the application. Distinguishing infrastructural concerns is not a big deal, and you may find it easy.

What about the second duty of application services? It's harder, isn't it? Application service logic always delegates to the domain logic and should not make business-critical decisions. One of the techniques for finding domain model business rules is asking questions about the use case, for example: Is this mandatory? Are these steps inseparable? If the steps must always come together, they belong to the domain; if you can find a way to recombine them, it's potentially not a domain concept.

After making sure our domain model is completely isolated, it’s time to know the patterns which help ensure our application service has no undesirable coupling. I list the patterns next:

  • Command Processor pattern: You have a command and a processor for each use case.
  • Publish/Subscribe pattern: A pattern for looser coupling.
  • Request/Reply pattern: Follows the One Model In-One Model Out approach.
  • Async/Await pattern: A pattern for building asynchronous, non-blocking applications.

Commands are an intent and can be rejected. They always execute against an aggregate root; with the aggregate root, you can enforce your invariants and make sure everything is right. In fact, aggregates make the decision to accept or reject commands. The system might have lots of commands, and the aggregate only stores (writes) the commands that are accepted and then switches to a new state.
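A sketch of that decision point, assuming a hypothetical bank-account aggregate: the handler only coordinates, and the aggregate root accepts or rejects the command based on its invariants.

interface WithdrawMoney { accountId: string; amount: number; }

class Account {
   constructor(public readonly id: string, private balance: number) {}

   // The aggregate root decides whether to accept or reject the command.
   withdraw(amount: number): void {
      if (amount <= 0 || amount > this.balance) {
         throw new Error("Withdrawal rejected: it would violate the account's invariants");
      }
      this.balance -= amount;   // accepted: the aggregate switches to a new state
   }
}

class WithdrawMoneyHandler {
   constructor(private accounts: Map<string, Account>) {}

   handle(command: WithdrawMoney): void {
      const account = this.accounts.get(command.accountId);
      if (!account) { throw new Error("Unknown account"); }
      account.withdraw(command.amount);
   }
}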

CQRS and Event Sourcing

Event Sourcing is not necessary for CQRS, but you can combine the two, and this combination leads to a new type of CQRS. It involves modeling the state changes made by applications as an immutable sequence, or log, of events. You may be thinking of the logging in your system, but, to be honest, event logging is not event sourcing. Event sourcing requires the current state to be derived from history; if you cannot reconstruct your current system state from history, you are not doing event sourcing. Indeed, events are business facts.

In Domain-Driven design, events must follow the Ubiquitous Language, and all the events in your system must be named in the past tense. Events are independent; they should carry enough data to describe themselves.
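For example, hypothetical order events might look like this in TypeScript: named in the past tense and carrying enough data to stand on their own.

// Past-tense, self-describing business facts.
interface OrderPlaced {
   type: "OrderPlaced";
   orderId: string;
   customerId: string;
   totalAmount: number;
   placedAt: string;   // ISO-8601 timestamp
}

interface OrderCancelled {
   type: "OrderCancelled";
   orderId: string;
   reason: string;
   cancelledAt: string;
}

type OrderEvent = OrderPlaced | OrderCancelled;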

Event sourcing is rising in popularity because it makes troubleshooting easier and has better performance characteristics; writes and reads can be scaled independently. In GRASP terms, event sourcing enables a loosely coupled application architecture. It also lets you add more applications in the future that process the same events but build a different materialized view.

Event sourcing acts as a time machine; this is known as a Temporal Query. Events keep the data in your system and let you describe and interpret it on your own; you may have multiple ways to use them. You cannot delete your events, because they are immutable, even when you have incorrect data in your event store: events are facts and cannot be deleted. There are some downsides, such as a higher learning curve and an unfamiliar programming model.

An aggregate that accumulates an unbounded number of events over time is a smell. For example, say you have hundreds of events in your event store and you need to replay them to get the desired result. If you replay all events, you face performance problems, and you may not need every event to be replayed. A snapshot is a bookmark in your series of events; it acts as the starting point for event replay. In fact, the snapshot is the same idea as the Memento pattern. It is only a technical optimization and has no meaning in the conceptual model of event sourcing.

It’s explicit that every client needs to read and display information and usually is interested in the current state. Event store alone is not able to represent current state. It needs some help to query; let’s call it a query handler. A query handler builds up a projection. A projection effectively is a current state representation that can be sourced from the events.

A query handler uses a projection instead of the event source. You can have many projections, each created and optimised for a specific query. Projections are built by projectors, which process event streams; they do not use the observer pattern. Projectors know where they are in the event stream by keeping a pointer into it.

You can rebuild a projection by resetting the pointer to zero. Projections and projectors are agnostic of each other: a projection can use denormalized data and can be redefined for each new interpretation, and you can switch between projectors easily, which means there is no need for database migrations anymore.
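A minimal sketch of a projector folding an event stream into a denormalized read model, reusing the shape of the hypothetical order events sketched earlier:

type OrderEvent =
   | { type: "OrderPlaced"; orderId: string; customerId: string; totalAmount: number; placedAt: string }
   | { type: "OrderCancelled"; orderId: string; reason: string; cancelledAt: string };

interface OrderListRow { orderId: string; status: "placed" | "cancelled"; totalAmount: number; }

// The projector walks the event stream from its pointer onward and updates the read model.
function project(events: OrderEvent[], rows: Map<string, OrderListRow>): void {
   for (const event of events) {
      if (event.type === "OrderPlaced") {
         rows.set(event.orderId, { orderId: event.orderId, status: "placed", totalAmount: event.totalAmount });
      } else {
         const row = rows.get(event.orderId);
         if (row) { row.status = "cancelled"; }
      }
   }
}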

If a command is responsible for writing and a query is responsible for reading, Reactors (reactions) are for all of the business logic between them. Reactors process an event stream as projectors do, but they do not maintain a projection for querying purposes. They also keep a pointer to show where they are. You could say projectors are a kind of reactor, but instead of creating a projection, reactors react to events, either by triggering external behavior or by emitting new events back to the event store, or both.

Reactors read event streams and subscribe to them. They are responsible for triggering the business rules attached to events. Reactors keep referenced data, sometimes called internal projections, watch the stream of events, and react when a condition is met. They execute a side effect in response to something that happened and may then emit an event about it.

Reactors are orthogonal. They are asynchronous and are subject to eventual consistency. Reactors are encapsulated—they are completely isolated and self-sufficient. Sagas are built from a series of reactors.

Eventual Consistency

Let’s talk about the other type of CQRS, the Eventual Consistency. Some systems are tightly coupled, as in a consistent Web application that is is talking to a database directly. This kind of system id not scalable enough. Indeed, the faster you go, the less scalable you will be. To have scalable systems, we need to have a loosely coupled system that, instead of talking directly to the database, will put data in the queue and later will stick it on the database. Then, you should guarantee the user request is registered in the queue and that the queued data will be stored in the database. It’s actually eventual because everything that happened will store with a latency, even with a millisecond.

Let’s see another example. Imagine in Instagram you have a picture; when everyone likes the picture, you send a post request to the server. The server store likes in a NoSQL database and, in a certain period, such every 30 seconds, will process data and store it in a relational database. It’s a clear example of Eventual Consistency. The client will not impact on the database directly and the system is allowed to make a decision about how and when to process the requests.

One of the important aspects of Eventual Consistency is the user experience. As a user, when you book a hotel room, some systems will not approve the booking immediately; they just take your request and then notify you via e-mail (after asking the hotel whether it has enough rooms). In this situation, the user can cancel the request without losing money (per the hotel's cancellation policy). Because the request is eventual, the user may have a chance to cancel before the request is sent to the hotel. I recommend Udi Dahan's article "Race Conditions Don't Exist."

Conclusion

CQRS offers answers to some of the most difficult problems in building complex, large-scale applications. However, this type of architecture is specially tuned to solve a very specific subset of problems.

Making DevOps Work—What It Takes to Create a Successful DevOps Experience!
https://www.codeguru.com/research/making-devops-work-what-it-takes-to-create-a-successful-devops-experience/ (Fri, 27 Jan 2017)

By Karthiga Sadasivan

Over the past few years, so much has been written about DevOps that any new content on the subject is greeted with a certain amount of skepticism. Also, with the IT industry’s extreme proclivity for catchy terms and acronyms, this skepticism within and outside the industry is quite understandable.

So what new perspective am I looking to provide?

Firstly, the intent is to dispel the notion that DevOps is a software development methodology or a combination of tools and applications. It is, in fact, not a concept related to inanimate software; it’s a human behavioral trait that establishes the primacy of people over processes and processes over tools. Adoption of DevOps does not entail transformation of processes; it is a transformation of the organizational culture. It mandates the dissolution of silos in the organizational structure, so that each functional group is accountable for every stage of the software delivery.

What is the best way to optimally utilize DevOps?

DevOps as a Practice, not an Experiment

Just like the adoption of agile development methodology, organizations have to embrace DevOps as a strategy, not as a fancy approach aimed to promote a forward-thinking approach. DevOps is no longer an innovative approach; most organizations have already adopted it and the ones that are leveraging its principles to innovate and extend its capabilities are the ones deriving optimum benefit from it.

The 7Cs—A step Ahead of Agile Delivery Models

Deadlines are becoming exceedingly stringent, and adoption of the agile methodology no longer can be restricted to the development stage; it has to span the entire software delivery lifecycle. DevOps, or a collaboration of Development and Operations, is intended to ensure just that—agile delivery by embracing the 7Cs of DevOps:

  1. Continuous Planning
  2. Continuous Development
  3. Continuous Integration
  4. Continuous Deployments
  5. Continuous Testing
  6. Continuous Monitoring
  7. Continuous Feedback

Figure 1: The 7Cs Approach to Continuous Delivery

Free and Seamless Information Sharing

In terms of cultural attributes, for an organization to effectively adopt DevOps and gain optimum value from it, the functional groups should break open individual silos and cross-functional seamless information sharing should be encouraged. Over and above information, each functional group also should share accountability for all stages of software delivery.

Utilization of Disruptive Technologies

Innovations in technology, such as cloud-based services, can have a significant impact on implementation of the DevOps methodology. In the past, organizations had to predict server space needed and invest on dedicated servers that could often lie unutilized. Now, continuous deployment enabled by DevOps allows developers to deploy code on servers as and when needed, made possible by the availability of cloud-based, pay-per-use servers.

Learning from Experience

Though the concept of DevOps has been floating around for quite a few years now, it is only recently that organizations have started optimally leveraging its benefits. One of the most successful proponents of DevOps has been Netflix, an organization whose business relies entirely on quality and consistency of service and DevOps delivers just that—consistent and efficient delivery of quality service leveraging process automation. Netflix engineers have taken automation to a new level by automating failure, using a script called ‘Chaos Monkey’ that randomly shuts down server instances, allows developers to experience outages first-hand and incentivizes them to build fault-tolerant systems. Now, Netflix developers identify and resolve vulnerabilities before they can impact customers, even while deploying code thousands of times per day.

Building an Organizational Culture for DevOps to Succeed

We’ve already established that DevOps is more of a people-oriented cultural transformation than a mere change in processes. If the leadership retains the belief that DevOps is something only the developers and coders need to think about, an organization-wide change is impossible. DevOps relies on breaking down cross-functional organizational silos. To do that, the senior leadership’s role is extremely critical.

Interestingly, DevOps is a concept that encourages proactive communication across an organization. At the same time, convincing rigid organizations to adopt a model like DevOps requires a structure that allows seamless communication across functional groups and hierarchies. This makes it doubly difficult to implement, because it necessitates a holistic cultural transformation, even before it can be put into practice. However, as numerous success stories have depicted, the initial few steps face the most formidable roadblocks. Once the framework is in place, the impact will be tangible enough for the entire organization to conform to it.

Adopting DevOps During Good Times, and not as a Crisis Management Tool

One cardinal error that many organizations tend to commit is to adopt a transformational approach when the organization is in the midst of a crisis. It needs to be understood that DevOps is not a panacea, nor is it a fire-fighting tool. It is, instead, an approach aimed at improving software delivery efficiency through better collaboration and an ability to view the larger picture and envisioning long-term benefits. Consequently, the best time to adopt an approach like DevOps is when the organization is in equilibrium. At such a juncture, there will be less cynicism and reasonable expectations, and nobody will expect an overnight turnaround.

In conclusion, a favorable work culture and clear understanding of technology and its impact are the two key drivers governing the success of DevOps in any organization. And the possibilities, as shown by the Amazons and Netflixes of the world, are endless. The time is now for Development and Operations to come together and work in tandem to ensure high quality software releases that are primed to meet business needs.

About the Author

Karthiga Sadasivan leads the DevOps Practice at Happiest Minds Technologies. She has 16 years of global IT industry experience with expertise in Engineering Services, DevOps, Agile Engineering, and Continuous Delivery. Karthiga has a keen interest in strategic planning and execution, building a sustainable DevOps culture, setting up new capabilities, and driving business growth. She holds a Master’s degree in Business Administration with a Bachelor’s degree in Electronics & Communication Engineering.

*** This article was contributed to Codeguru. All Rights Reserved ***

Contact Display Switch Animation: The Transition from List View to Grid View
https://www.codeguru.com/research/contact-display-switch-animation-the-transition-from-list-view-to-grid-view/ (Wed, 27 Apr 2016)

By Sergii Ganushchak and Roman Sherbakov

Every time we design screens that feature friend lists or contact lists, we face the problem of choosing between list view and grid view. Although list view usually provides more details about each user or contact, grid view allows more users or contacts to appear on the screen at the same time.

Sometimes, you can’t say for sure which variant is best for a particular use case. That’s why we designed a UI that allows users to switch between list and grid views on the fly and choose the most convenient display type for themselves.

Figure 1: Two ways to view a UI on a mobile device

In addition to usernames and profile pictures, list view also provides information about posts, comments, and likes. A list view can include any information you need while browsing your friends list.

Grid view displays only profile pictures and usernames. This lets us fit more profiles on one screen. Grid view is useful when you’re looking for a specific user and don’t need any additional information.

We created design mockups for both list and grid views using Sketch. As soon as the mockups were ready, I used Principle to create a smooth transition between the two display types.

Contact Display Switch Animation Use Cases

You can use our Contact Display Switch for:

  • Social networking apps
  • Dating apps
  • Email clients
  • Any other app that features lists of friends or contacts

Furthermore, the DisplaySwitcher component that we created based on the idea of Contact Display Switch animation is not limited to friends lists and contact lists; it can work with any other content. It’s up to your imagination!

Developing a DisplaySwitcher Component

First, we’ll tell you how you can use our DisplaySwitcher component in your own iOS project. Then, we’ll look under the hood and see how the animated transition between two collection view layouts works.

How to Use It

To begin, you need to create two layouts—one for displaying a list and another for displaying a grid:

   private lazy var listLayout = BaseLayout(staticCellHeight:
      listLayoutStaticCellHeight, nextLayoutStaticCellHeight:
      gridLayoutStaticCellHeight, layoutState: .ListLayoutState)
   private lazy var gridLayout = BaseLayout(staticCellHeight:
      gridLayoutStaticCellHeight, nextLayoutStaticCellHeight:
      listLayoutStaticCellHeight, layoutState: .GridLayoutState)

Parameters:

  • staticCellHeight: The height of the current cell
  • nextLayoutStaticCellHeight: The height of the next layout’s cell
  • layoutState: The layout state (list or grid)

After the layouts are ready, you need to set the current layout for the collection view (in our case, that's listLayout) and track the current layout state using the CollectionViewLayoutState enum:

collectionView.collectionViewLayout = listLayout
private var layoutState: CollectionViewLayoutState =
   .ListLayoutState

Next, override two required methods of the collection view datasource:

func collectionView(collectionView: UICollectionView,
   numberOfItemsInSection section: Int) -> Int
func collectionView(collectionView: UICollectionView,
   cellForItemAtIndexPath indexPath: NSIndexPath) ->
   UICollectionViewCell

And also, override one method of the collection view delegate:

   func collectionView(collectionView: UICollectionView,
         transitionLayoutForOldLayout fromLayout:
         UICollectionViewLayout, newLayout toLayout:
         UICollectionViewLayout) ->
         UICollectionViewTransitionLayout {

      let customTransitionLayout =
         TransitionLayout(currentLayout:
         fromLayout, nextLayout: toLayout)
      return customTransitionLayout
   }

At this point, return the TransitionLayout instance. This means that you are going to use a custom transition. You can find more info on this method here.

Finally, you must make layout changes for some events (like pressing a button) using the TransitionManager class instance:

      let transitionManager: TransitionManager
      if layoutState == .ListLayoutState {
         layoutState = .GridLayoutState
         transitionManager = TransitionManager(duration:
            animationDuration, collectionView: collectionView!,
            destinationLayout: gridLayout, layoutState:
            layoutState)
      } else {
         layoutState = .ListLayoutState
         transitionManager = TransitionManager(duration:
            animationDuration, collectionView: collectionView!,
            destinationLayout: listLayout, layoutState: layoutState)
      }
      transitionManager.startInteractiveTransition()

Parameters:

  • animationDuration: Time duration of the transition
  • collectionView: Current collection view
  • destinationLayout: The layout you’re switching to
  • layoutState: The state of the layout you’re switching to

That’s it! Now you know how to use our component!

Going Under the Hood

We use five classes to implement our DisplaySwitcher:

  • BaseLayout is a class that deals with building layouts and overrides the UICollectionViewLayout methods for calculations of the required contentOffset when switching from one layout to another.
  • BaseLayoutAttributes is a class for adding custom attributes.
  • TransitionLayout is a class that defines the custom attributes.
  • TransitionManager is a class that uses TransitionLayout and deals with the transition between layouts according to preset time durations.
  • RotationButton is a custom class that inherits from UIButton, and is used for a button that animates transition between the layouts.

Let’s explore these classes in more detail.

BaseLayout

In the BaseLayout class, we use methods for building list and grid layouts. But, what’s most interesting here is the contentOffset calculation that should be defined after the transition to a new layout.

First, save the contentOffset of the layout you are switching from:

   override func prepareForTransitionFromLayout(oldLayout:
         UICollectionViewLayout) {
      previousContentOffset = NSValue(CGPoint:
         collectionView!.contentOffset)
      return super.prepareForTransitionFromLayout(oldLayout)
   }

Then, calculate the contentOffset for the new layout in the targetContentOffsetForProposedContentOffset method:

   override func targetContentOffsetForProposedContentOffset(proposedContentOffset: CGPoint) -> CGPoint {
      let previousContentOffsetPoint = previousContentOffset?.CGPointValue()
      let superContentOffset = super.targetContentOffsetForProposedContentOffset(proposedContentOffset)

      if let previousContentOffsetPoint = previousContentOffsetPoint {
         if previousContentOffsetPoint.y == 0 {
            return previousContentOffsetPoint
         }
         if layoutState == CollectionViewLayoutState.ListLayoutState {
            let offsetY = ceil(previousContentOffsetPoint.y +
               (staticCellHeight * previousContentOffsetPoint.y / nextLayoutStaticCellHeight) + cellPadding)
            return CGPoint(x: superContentOffset.x, y: offsetY)
         } else {
            let realOffsetY = ceil((previousContentOffsetPoint.y / nextLayoutStaticCellHeight *
               staticCellHeight / CGFloat(numberOfColumns)) - cellPadding)
            let offsetY = floor(realOffsetY / staticCellHeight) * staticCellHeight + cellPadding
            return CGPoint(x: superContentOffset.x, y: offsetY)
         }
      }

      return superContentOffset
   }

And then, clear the value of the variable in the finalizeLayoutTransition method:

   override func finalizeLayoutTransition() {
      previousContentOffset = nil
      super.finalizeLayoutTransition()
   }

BaseLayoutAttributes

In the BaseLayoutAttributes class, a few custom attributes are added:

   var transitionProgress: CGFloat = 0.0
   var nextLayoutCellFrame = CGRectZero
   var layoutState: CollectionViewLayoutState =
      .ListLayoutState

transitionProgress is the current value of the animation transition that varies between 0 and 1. It’s needed for calculating constraints in the cell (see example on GitHub).

nextLayoutCellFrame is a property that returns the frame of the cell in the layout you're switching to. It's also used for the cell layout configuration during the transition.

layoutState is the current state of the layout.

TransitionLayout

The TransitionLayout class overrides two UICollectionViewLayout methods, layoutAttributesForElementsInRect and layoutAttributesForItemAtIndexPath, where we set the values of the custom BaseLayoutAttributes properties.

TransitionManager

The TransitionManager class uses UICollectionView's startInteractiveTransitionToCollectionViewLayout method, to which you pass the layout it must switch to:

func startInteractiveTransition() {
   UIApplication.sharedApplication()
      .beginIgnoringInteractionEvents()
   transitionLayout =
         collectionView.startInteractiveTransitionToCollectionViewLayout
         (destinationLayout, completion: { success, finish in
      if success && finish {
         self.collectionView.reloadData()
         UIApplication.sharedApplication()
            .endIgnoringInteractionEvents()
      }
   }) as! TransitionLayout
   transitionLayout.layoutState = layoutState
   createUpdaterAndStart()
}

The CADisplayLink class is used to control the animation duration. This class helps calculate the animation progress based on the preset animation duration:

private func createUpdaterAndStart() {
   start = CACurrentMediaTime()
   updater = CADisplayLink(target: self, selector:
      Selector("updateTransitionProgress"))
   updater.frameInterval = 1
   updater.addToRunLoop(NSRunLoop.currentRunLoop(),
      forMode: NSRunLoopCommonModes)
}

dynamic func updateTransitionProgress() {
   var progress = (updater.timestamp - start) / duration
   progress = min(1, progress)
   progress = max(0, progress)
   transitionLayout.transitionProgress = CGFloat(progress)

   transitionLayout.invalidateLayout()
   if progress == finishTransitionValue {
      collectionView.finishInteractiveTransition()
      updater.invalidate()
   }
}

That’s it! Use our DisplaySwitcher in any way you like! Check it out on GitHub.

And, here’s our Contact Display Switch animation on Dribbble.

About the Authors

Sergii Ganushchak is a mobile UX/UI designer. He loves his family, his job, and his bike. You can follow Sergii on Dribbble, where he posts his latest works, and on Twitter.

Roman Sherbakov is an iOS developer at Yalantis.

The post Contact Display Switch Animation: The Transition from List View to Grid View appeared first on CodeGuru.

]]>
Hapi.js: Building Custom Handlers https://www.codeguru.com/research/hapi-js-building-custom-handlers/ Mon, 18 Apr 2016 07:15:00 +0000 https://www.codeguru.com/uncategorized/hapi-js-building-custom-handlers/ By Matt Harrison This article was excerpted from the book Hapi.js in Action. Handlers are where you declare what should actually happen when a request matches one of your routes. The basic handler is just a JavaScript function with the signature: function (request, reply) {...} There are also a number of built-in handlers that you […]

The post Hapi.js: Building Custom Handlers appeared first on CodeGuru.

]]>
By Matt Harrison

This article was excerpted from the book Hapi.js in Action.

Handlers are where you declare what should actually happen when a request matches one of your routes. The basic handler is just a JavaScript function with the signature:

function (request, reply) {...}

There are also a number of built-in handlers that you can use to define complex behaviour through configuration. An example of one of these is the directory handler for serving static content:

Listing 1: The built-in directory handler

server.route({
   method: 'GET',
   path: '/assets/{path*}',
   handler: {
      directory: {                                //#A
         path: Path.join(__dirname, 'assets')     //#A
      }                                           //#A
   }
});

#A: Behavior of the route is defined in configuration using built-in directory handler

One of the central philosophies of hapi is that configuration is preferable to code. Configuration is usually easier to write, easier to read, and easier to modify and reason about than the equivalent code.
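To make the contrast concrete, here's a rough sketch (not from the book) of what the static-asset route from Listing 1 might look like if written by hand with a plain handler function, using only Node's fs and path modules. The built-in directory handler also takes care of details such as caching headers and path traversal protection that this sketch ignores:

const Fs = require('fs');
const Path = require('path');

server.route({
   method: 'GET',
   path: '/assets/{path*}',
   handler: function (request, reply) {

      // Resolve the requested file relative to the assets directory
      const file = Path.join(__dirname, 'assets', request.params.path || '');

      Fs.readFile(file, (err, contents) => {

         if (err) {
            return reply('Not found').code(404);
         }

         reply(contents);
      });
   }
});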

If you find yourself repeating a common set of tasks or behaviour in your handlers, you could consider extracting a new custom handler type. Without further ado, let’s see an example.

The internationalization (i18n) example

In this example, we’re building a (very) small website. The website will cater to an international audience, so we want to include support for multiple languages from the start. Internationalization, also known as i18n, isn’t a feature that’s built into hapi so you’re going to create it yourself!

In this article, you’re going to see how you can write a custom handler to wrap up the complexity of this task into a simple-to-use handler.

The website, which is in its early stages of development, currently only has one page—the homepage. We have created a Handlebars template for that:

Listing 2: templates/index.hbs

<h1>Hello!</h1>

Ok, so when I called it a website I was probably overstating things. It’s just a single line of HTML that says hello—but it has potential!

We currently have a simple skeleton hapi application to serve this view.

Listing 3: index.js: the basic website application

const Hapi = require('hapi');
const Path = require('path');

const server = new Hapi.Server();
server.connection({ port: 4000 });

server.register(require('vision'), (err) => {     //#A

   if (err) {
      throw err;
   }

   server.views({                                //#B
      engines: {                                 //#B
         hbs: require('handlebars')              //#B
      },                                         //#B
      path: Path.join(__dirname, 'templates')    //#B
   });                                           //#B

   server.route([
      {
         method: 'GET',
         path: '/',
         handler: {                              //#C
            view: 'index'                        //#C
         }                                       //#C
      }
   ]);

   server.start(() => {

      console.log('Server started!');
   });
});

#A: Load vision module
#B: Configure view engine
#C: Use the view handler to render the index template

We've decided to send our Handlebars templates off for translation, to a French translator and a Chinese translator. We also come up with a new naming scheme, suffixing each template name with its ISO 639-1 two-letter language code. We now have three templates in total, named:

templates/index_en.hbs     //#A
templates/index_fr.hbs     //#B
templates/index_zh.hbs     //#C

#A: English template
#B: French template
#C: Chinese template

Parsing the Accept-Language header

Our application needs to look at an incoming request and decide which language-specific template it should serve, as shown in Figure 1.

Hapi1
Figure 1: The application should determine which template to use

The Accept-Language header, when present, specifies the user’s preferred languages, each with a weighting or priority (called a “quality factor” in the HTTP spec, denoted by q). An example of an Accept-Language header is:

Accept-Language: da, en-gb;q=0.8, en;q=0.7

This can be translated into:

I would like this resource in Danish. If you don’t have Danish, I would like British English. If you don’t have British English, I will settle for any kind of English.

We can use a Node.js package, appropriately named accept-language, to help parse those headers into a more usable form. To see what the accept-language module gives us back, you can run this one-liner in your terminal (after running npm install --save accept-language in the project):

node -e "console.log(require('accept-language'
   ).parse('da, en-gb;q=0.8, en;q=0.7'))"

The output should be:

[ { value: 'da', language: 'da', region: null, quality: 1 },
  { value: 'en-gb', language: 'en', region: 'gb', quality: 0.8 },
  { value: 'en', language: 'en', region: null, quality: 0.7 } ]

The array returned by AcceptLanguage.parse() is ordered by user language preference.

First implementation

We can use our language-specific templates and knowledge of the Accept-Language header to build a naive implementation of our i18n-enabled hapi-powered website.

Hapi2
Figure 2: The process we will use to find a suitable template for a request

When a request is received, we want to check if we have a matching template for any of the languages in the Accept-Language header. If there’s no header present, or there are no matching templates, we will fall back to rendering the default language template. This process is shown in figure 2.

The implementation of this for a single route is shown below; it assumes the accept-language module has been required at the top of index.js as AcceptLanguage:

Listing 4: index.js: I18n-enabled route serving language-specific templates

server.route([
   {
      method: 'GET',
      path: '/',
      handler: function (request, reply) {

         const supportedLanguages = ['en', 'fr', 'zh'];        //#A
         const defaultLanguage = 'en';                         //#A
         const templateBasename = 'index';

         const acceptLangHeader =                              //#B
            request.headers['accept-language'];
         const langs =
            AcceptLanguage.parse(acceptLangHeader);            //#B

         for (let i = 0; i < langs.length; ++i) {              //#C
            if (supportedLanguages.indexOf(langs[i].language)  //#C
                  !== -1) {                                    //#C
               return reply.view(templateBasename +            //#C
                  '_' + langs[i].language);
            }                                                  //#C
         }                                                     //#C

         reply.view(templateBasename + '_'                     //#D
            + defaultLanguage);
      }
   }
]);

#A: Define some settings
#B: Parse the Accept-Language header
#C: Loop through each preferred language and if the current one is supported, render the view
#D: Otherwise, render the default language’s view

You can test this out, trying different Accept-Language headers, by sending some requests with cURL:

$ curl localhost:4000/ -H "Accept-language: en"
<h1>Hello!</h1>

$ curl localhost:4000/ -H "Accept-language: zh"
<h1>你好!</h1>

$ curl localhost:4000/ -H "Accept-language: fr"
<h1>Bonjour!</h1>

$ curl localhost:4000/ -H "Accept-language: de"
<h1>Hello!</h1>

Making things simple again

Although our first implementation certainly works, it's pretty ugly and involves a lot of boilerplate code that would need to be copied into the handler of every new route we add. Do you remember how easy it was to use the basic view handler from vision? That was a simpler time; we want to get back to that:

server.route([
   {
      method: 'GET',
      path: '/',
      handler: {
         view: 'index'
      }
   }
]);

What we need to do, then, is build a custom handler that can be used just like the code sample above and that takes care of all the messy business behind the scenes for us. You create new custom handlers using the server.handler() method.

API METHOD: server.handler(name, method)

(http://hapijs.com/api#serverhandlername-method)

Your custom handler function will accept the route and the options given to it as parameters and should return a handler with the usual function signature.

Listing 5: index.js: creating the custom i18n-view handler

server.handler('i18n-view', (route, options) => {
   const view = options.view;                           //#A

   return function (request, reply) {

      const settings = {                               //#B
         supportedLangs: ['en', 'fr', 'zh'],           //#B
         defaultLang: 'en'                             //#B
      };

      const langs =                                    //#C
         AcceptLanguage.parse(request.headers
         ['accept-language']);

      for (let i = 0; i < langs.length; ++i) {         //#D
         if (settings.supportedLangs.indexOf           //#D
               (langs[i].language) !== -1) {
            return reply.view(view + '_'               //#D
               + langs[i].language);
         }                                             //#D
      }                                                //#D

      reply.view(view + '_' +                          //#E
         settings.defaultLang);
   }
});

#A: View name is passed in through options
#B: Define some settings
#C: Parse the Accept-Language header
#D: Loop through each preferred language and if the current one is supported, render the view
#E: Otherwise, render the default language's view

One improvement I would like to make to this is to remove the settings object from the handler. Having these explicit values in there tightly binds the custom handler to our particular usage. It's a good idea to keep configuration like this in a central location.

When creating a hapi server you can supply an app object, with any custom configuration you would like. These values are then accessible inside server.settings.app, so let’s move the i18n configuration there:

Listing 6: index.js: storing app config in server.settings.app

const server = new Hapi.Server({
   app: {                                           //#A
      i18n: {                                       //#A
         supportedLangs: ['en', 'fr', 'zh'],        //#A
         defaultLang: 'en'                          //#A
      }                                             //#A
   }                                                //#A
});

...

server.handler('i18n-view', (route, options) => {
   const view = options.view;

   return function (request, reply) {

      const settings = server.settings.app.i18n;   //#B

...

#A: Store application config when creating the server
#B: Access the same config later in server.settings.app

Now, using our shiny new custom handler is as simple as supplying an object with an i18n-view key and setting the template name:

Listing 7: index.js: using the custom i18n-view handler

server.route([
   {
      method: 'GET',
      path: '/',
      handler: {
         'i18n-view': {
            view: 'index',
         }
      }
   }
]);

We can now reuse this handler throughout our codebase without any ugly boilerplate code.
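For example, adding a hypothetical second page is just a matter of pointing the handler at another template basename; this sketch assumes about_en.hbs, about_fr.hbs, and about_zh.hbs translations exist alongside the index templates:

server.route([
   {
      method: 'GET',
      path: '/about',
      handler: {
         'i18n-view': {
            view: 'about'
         }
      }
   }
]);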

 


The post Hapi.js: Building Custom Handlers appeared first on CodeGuru.

]]>
New to Big Data? Start with Kafka https://www.codeguru.com/research/new-to-big-data-start-with-kafka/ Fri, 12 Feb 2016 08:15:00 +0000 https://www.codeguru.com/uncategorized/new-to-big-data-start-with-kafka/ Kafka has become a popular distributed messaging system for a big data environment, so it made good sense for me to write an article about it. In this article, we will look at the high level architecture, the components, and terminologies of Kafka, and understand the way it works. Introduction In the big data world, […]

The post New to Big Data? Start with Kafka appeared first on CodeGuru.

]]>
Kafka has become a popular distributed messaging system for big data environments, so it made good sense for me to write an article about it.

In this article, we will look at the high level architecture, the components, and terminologies of Kafka, and understand the way it works.

Introduction

In the big data world, data is expected to be pumped to its destination at very high frequency and in very high volumes, so distributed messaging systems such as Kafka make it easy to persist the data temporarily and consume it later in batches. Following are a few of the advantages of using Kafka as a distributed messaging system.

  • High throughput
  • Easy linear scalability
  • Inbuilt partitioning
  • Easy and quick replication
  • High fault tolerance
  • A publish-subscribe model

System Architecture

The Kafka environment is a distributed environment; this means it consists of a cluster of servers. Figure 1 provides a high-level view of the Kafka architecture.

Kafka1
Figure 1: The architecture of Kafka

Broker

Each server in the Kafka cluster is called a broker. At any point in time, the cluster can be scaled linearly by simply adding a new broker.

Topic

The messages that get persisted in the Kafka brokers are categorized into topics. A topic can be partitioned, and thus its messages get distributed across the cluster.

Producer

A producer is the component that pumps messages into the Kafka cluster. Multiple producers can send data to the Kafka brokers simultaneously. Each producer publishes its messages to a particular topic.

Consumer

A consumer is a component that subscribes to the Kafka brokers to receive messages. Consumers listen for messages on a particular topic.

Partition

The number of partitions can be configured for each topic. Each topic can be divided into multiple partitions, and the partitions get distributed across multiple brokers. Based on the replication factor, each partition also gets replicated across brokers. Among a partition and its replicas, one acts as the “leader” and the others act as “followers.” When the leader fails, one of the followers automatically steps up to become the leader. This ensures high fault tolerance and less downtime.
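As an illustration of partitions and replication (this sketch isn't part of the original article, and it assumes the Node.js kafkajs client and a broker reachable at localhost:9092), here's how a topic with three partitions and a replication factor of 3, like the one in Figure 2, might be created. The topic name contact-events is hypothetical:

// Sketch: create a topic with three partitions and a replication factor of 3.
const { Kafka } = require('kafkajs');

const kafka = new Kafka({ clientId: 'demo-admin', brokers: ['localhost:9092'] });
const admin = kafka.admin();

async function createTopic() {
   await admin.connect();
   await admin.createTopics({
      topics: [{
         topic: 'contact-events',    // hypothetical topic name
         numPartitions: 3,           // messages are spread across three partitions
         replicationFactor: 3        // each partition is copied to three brokers
      }]
   });
   await admin.disconnect();
}

createTopic().catch(console.error);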

Figure 2 shows a diagram of a sample partitioned topic.

Kafka2
Figure 2: A sample partitioned diagram

In Figure 2, you see that there are three brokers, three partitions, and the replication factor is 3. The leader partition is marked in green and the followers are marked in brown. I have also expanded a partition to show how the messages are stored in a partition and indexed with an offset value.

Offset

Each message persisted inside a partition is assigned a numeric offset value. Within each partition, the messages are ordered by offset value and then stored. When a message is consumed, the consumer also receives the partition ID and offset value of that message.
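To see the partition ID and offset arrive alongside each message, here's a minimal sketch, again assuming the kafkajs client, a broker at localhost:9092, and the hypothetical contact-events topic from the previous sketch. It publishes one message and logs what a consumer receives:

// Sketch: publish a message and observe the partition and offset the consumer gets with it.
const { Kafka } = require('kafkajs');

const kafka = new Kafka({ clientId: 'demo-app', brokers: ['localhost:9092'] });

async function run() {
   const producer = kafka.producer();
   await producer.connect();
   await producer.send({
      topic: 'contact-events',
      messages: [{ key: 'user-42', value: 'profile-updated' }]   // hypothetical payload
   });

   const consumer = kafka.consumer({ groupId: 'demo-group' });
   await consumer.connect();
   await consumer.subscribe({ topics: ['contact-events'], fromBeginning: true });
   await consumer.run({
      eachMessage: async ({ topic, partition, message }) => {
         // Every delivered message carries its partition ID and offset.
         console.log(`${topic} [partition ${partition}] offset ${message.offset}:`,
            message.value.toString());
      }
   });
}

run().catch(console.error);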

Key Points to Note

Now, let us see a few key points to remember about the Kafka framework:

  • The messages don’t get deleted upon consumption. They live until a configured retention period is reached.
  • At any point in time, the messages can be re-consumed by using their offset value (see the sketch after this list).
  • A message can be published to a topic and a message can be consumed from a topic.
  • A message can be uniquely identified by using the combination of the topic name, partition ID, and the offset value of the message.
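As promised above, here's a small sketch of re-consuming from a known offset, again assuming kafkajs and the same hypothetical topic; kafkajs exposes this through the consumer's seek() method, which takes effect once run() has started:

// Sketch: re-consume messages of partition 0 starting from a known offset.
const { Kafka } = require('kafkajs');

const kafka = new Kafka({ clientId: 'demo-replay', brokers: ['localhost:9092'] });
const consumer = kafka.consumer({ groupId: 'replay-group' });

async function replayFrom(offset) {
   await consumer.connect();
   await consumer.subscribe({ topics: ['contact-events'] });
   await consumer.run({
      eachMessage: async ({ partition, message }) => {
         console.log(`partition ${partition} offset ${message.offset}:`,
            message.value.toString());
      }
   });
   // Reposition the consumer; offsets are passed as strings in kafkajs.
   consumer.seek({ topic: 'contact-events', partition: 0, offset });
}

replayFrom('0').catch(console.error);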

Conclusion

I hope this article gave you an overview of, and an architectural insight into, Apache Kafka. I'll see you in my future article about creating producer and consumer applications for Kafka using .NET and C#.

Happy reading!

The post New to Big Data? Start with Kafka appeared first on CodeGuru.

]]>
Getting Down to Basics with User Acceptance Testing (UAT) https://www.codeguru.com/research/getting-down-to-basics-with-user-acceptance-testing-uat/ Wed, 03 Feb 2016 08:15:00 +0000 https://www.codeguru.com/uncategorized/getting-down-to-basics-with-user-acceptance-testing-uat/ By Nilesh Patel, KMS Technology Introduction Towards the end of development, every piece of software should be subjected to a final phase of testing. It doesn’t matter if you call it user acceptance testing, end user testing, or beta testing. The aim is the same: to ensure that the application meets the intended business functions […]

The post Getting Down to Basics with User Acceptance Testing (UAT) appeared first on CodeGuru.

]]>
By Nilesh Patel, KMS Technology

Introduction

Towards the end of development, every piece of software should be subjected to a final phase of testing. It doesn’t matter if you call it user acceptance testing, end user testing, or beta testing. The aim is the same: to ensure that the application meets the intended business functions for end users in real-world conditions. User acceptance testing (UAT) enables developers to make a final round of adjustments before the software is released. It’s the last chance to raise any lingering issues and confirm that the software actually does what it’s intended to do. This requires a change of gear from the test team.

A Fresh Perspective on User Acceptance Testing

To put it simply, good testers are masters of breaking things. They can look at an application, consider how it works, and immediately imagine a number of ways you might trip it up. Testers often focus on negative scenarios to unearth defects. They bring their functional testing experience and technical knowledge to bear.

End users are different. They’re focused on positive scenarios because they try to achieve goals, which are typically aligned with the business functions the application was created to fulfill. In other words, they don’t go out of their way to break apps. They also have little understanding of what’s going on behind the scenes. They don’t know what database is being accessed or what technology is being used, and they don’t care. They just want things to work as expected.

Know Your User

Testers have no problem creating detailed functional tests based on a set of requirements, but with UAT the test cases have to be simpler. They should be drawn from use cases, and they can't afford to be too specific because there's a lot of value in seeing how end users go about completing a task. Do they make assumptions and take wrong turns? Are there problems that the developers and testers can't see because they're so close to, and so familiar with, the application?

To create a good set of end user tests, it’s important to understand who the end users are. What does the typical user of this application look like? What age are they? What gender? Depending on the application in question, there could be other important demographics. This information is obviously necessary for recruiting beta testers, but it also helps to create solid use cases.

A broad range is often desirable, and so it may be necessary to design test cases for different groups of end users. If those users lack business domain knowledge, they might require more support and simpler test cases.

Understand the Business Reasons for User Acceptance Testing

Sometimes, testers might create use cases, but usually they’ll be provided by business analysts. It’s vital that testers are able to discuss the UAT with the business team and build a clear picture of what kind of feedback they’re looking for. Is the focus on uncovering defects, finding usability problems, or a mixture of both?

This will really help testers to create the right kinds of workflows and checklists for the beta testers to complete. There’s also a chance that the business team will want a particular focus on a certain set of features, or that something will crop up in the first round of UAT that they feel requires more investigation, and the test cases will have to be amended to accommodate that.

Real-world Conditions

To get as clear a picture as possible from the UAT, it’s important to use real-world data, not test data. Ideally, you’ll test in the production environment. The aim is to mimic the final real-world application conditions as closely as possible. If you use a test application or a separate test server, you’re not necessarily going to get the full picture.

Find ways to emulate your expected user numbers and consider potential complications. End users may encounter issues with software conflicts or browser plug-ins. Beta testing and real-world conditions really widen the net for catching defects.

When the UAT begins, testers will have to work to interpret and categorize the results coming in. They’ll need to differentiate between genuine defects and feedback issues on usability. Any problems with the test cases should become clear pretty quickly, and they can be tweaked and refocused through discussion with the business team.

The closer that UAT can get to emulating the final real-world scenario for the application in terms of data, environment, and users, the more confident everyone can be about signing off and pushing it live.

About the Author

Nilesh Patel is QA Manager for KMS Technology, (www.kms-technology.com) a provider of IT services across the software development lifecycle with offices in Atlanta and Ho Chi Minh City. He was previously at LexisNexis doing independent software testing. Contact him at nileshpatel@kms-technology.com.

The post Getting Down to Basics with User Acceptance Testing (UAT) appeared first on CodeGuru.

]]>