Emacs vs. WebStorm for Node.js Development

If you are a developer, you know the struggle of choosing the right tools for your projects. Picking an editor or IDE is a real hassle if you don't have your facts straight. In software development, Emacs and WebStorm are two very common names, and both support Node.js development. The question, then, is which of these two tools you should consider when starting your Node.js project.

In this article, I shall guide you through all the details, from big to small, regarding Emacs and Webstorm to help you choose which of these two might be better for your project.

Similarities

There are a few features shared by both Emacs and WebStorm. Firstly, both are capable of connecting to external code quality tools such as ESLint, providing real-time code analysis and bug spotting. Secondly, both tools provide deep JavaScript analysis. WebStorm has its own exclusive JavaScript analysis engine, whereas Emacs has js2-mode. Both help find bugs and issues in the code, such as a function that does not return a value, and can perform minor refactorings such as extracting variables. Lastly, both tools support smart auto-completion. Emacs does this through Tern, an open-source JavaScript code analyzer that various editors can connect to, while WebStorm uses its own engine, which additionally parses JSDoc comments and TypeScript declaration files.

Why WebStorm?

Although Emacs is great, there are a few things that lie outside its scope and just cannot be done right with it. Tellingly, some of the peskiest things Emacs cannot do are handled well by WebStorm, making it a great choice for developers. One of the major ones is debugging. The basic debugger in Node.js is terribly slow, and WebStorm emerges as a great alternative here. It wouldn't be wrong to say that this debugger is in itself enough to make a developer opt for WebStorm over anything else for serious JavaScript development. Though Emacs does have a built-in debugging interface, it fails to work with the default Node.js debugger; and even if it did work, it would still be held back by the utterly sluggish performance of that debugger.

Definition and symbol lookup is another incredible WebStorm feature. While Emacs can find definitions and symbols in a single file by means of Tern, WebStorm can actually index your entire project and jump straight to a definition, or at least give you a well-pruned list of candidates to choose from. This makes navigating through code exponentially quicker, and it really reduces the strain of context switching between files.

Testing integration is the last big win for WebStorm. When using the most popular JavaScript testing frameworks, Jasmine and Mocha, WebStorm lets you run individual tests, entire test files, or entire test suites with only a couple of keyboard shortcuts.

If we combine this last feature with debugging and code navigation between files, WebStorm turns into an astounding TDD tool. Emacs can be trivially set up to run an entire test suite, but not to run individual tests for any of the common JavaScript testing frameworks. While this is possible in Emacs too (such integration exists for Ruby's RSpec tools), nobody has taken the time to make it work.

Why Emacs?

Regardless of the considerable number of advantages WebStorm has, there are still various things Emacs is better at. One of the most significant is auto-completion. I have already mentioned above that both Emacs and WebStorm can do some degree of smart, context-aware completion. The truth of a dynamic scripting language like JavaScript, however, is that on many occasions there is no relevant completion available, because there is too much ambiguity or the code is not yet well-formed.

In these cases, WebStorm simply doesn't offer any completion, compelling you to type everything manually. Emacs accomplishes something amazing here. It simply tokenizes all of the words in every buffer of the same kind, for example every JavaScript buffer, looks at the characters you have begun to type, and attempts to find a match. In practice, if you type around three characters of a word, it will often offer you around 3-5 choices of words starting with those characters. This drastically improves speed, as you don't need to type whole words to keep moving.

Something else Emacs is commendable at is Vim emulation. WebStorm's Vim emulation is decent too, but it encounters bugs frequently. These include keys not registering, getting into abnormal states where you seem to be somewhere between normal and insert mode, and a peculiarly sensitive mouse selection that consistently seems to put you in visual mode when you didn't ask for it. Also, some advanced features like macros rarely work correctly. WebStorm's emulation also fails to copy things to the system clipboard, which Emacs does as a matter of course. Large-file editing and macro-driven refactoring are likewise better done in Emacs.

Summary

Like every other app development tool, Emacs and WebStorm, too, have their own advantages and disadvantages. A significant shared disadvantage is automated refactoring, which is an incredibly tricky problem in a dynamic language. Neither Emacs nor WebStorm is capable of going beyond extractions or simple variable renames, which can make refactoring across files a wearisome, error-prone process. Similarly, both tools are weak at generating code for JavaScript. Each supports expanding developer-defined templates, but little else. WebStorm appears to be at least trying in this regard: presently, it can generate a method that you call in code, but only into the same file you are in, and it will often get the call wrong. A developer must consider all these points before choosing the right tool and beginning development, so as to utilize its features to their utmost capacity.

Learn How to Start Working with KAFKA and More

Kafka is a distributed publish-subscribe messaging system designed to be fast, scalable, and durable. It was developed by LinkedIn and open-sourced in 2011. With the increasing complexity of real-time data processing challenges, it makes an extremely desirable option for data integration, and it is a great solution for applications that require large-scale message processing.

The components of Kafka are:

  1. Zookeeper
  2. Kafka Cluster – which contains one or more servers called brokers
  3. Producer – which publishes messages to Kafka
  4. Consumer – which consumes messages from Kafka.

Architecture :

Kafka saves messages on disk and allows subscribers to read from it. Communication between producers, the Kafka cluster, and consumers takes place over the TCP protocol. All published messages are retained for a configurable period of time. Each Kafka broker may contain multiple topics into which producers publish messages. Each topic is broken into one or more ordered partitions. Partitions are replicated across multiple servers for fault tolerance. Each partition has one leader server and zero or more follower servers, depending on the replication factor of the partition.

When a producer publishes to a Kafka cluster, it queries which partitions exist for that topic and which brokers are responsible for each partition. It then sends each message to the broker responsible for the chosen partition (picked using some hashing algorithm).

Consumers keep track of what they consume (the offset within each partition) and store that position in Zookeeper. In case of consumer failure, a new process can start from the last saved point. Each consumer in a group gets assigned a set of partitions to consume from.

Producers can attach a key to each message, in which case all messages with the same key go to the same partition. When consuming from a topic, it is possible to configure a consumer group with multiple consumers. Each consumer in a consumer group reads messages from a unique subset of partitions in each topic it subscribes to, so each message is delivered to one consumer in the group, and all messages with the same key arrive at the same consumer.

Role of Zookeeper in Kafka

Zookeeper provides access to clients in a tree-like structure. Kafka uses Zookeeper for storing configurations and reading them across the cluster in a distributed fashion. It maintains information such as the topics under each broker and the offsets of consumers.

Steps to get started with Kafka (for UNIX):

  1. Download Kafka – wget http://mirror.sdunix.com/apache/kafka/0.8.2.0/kafka_2.10-0.8.2.0.tgz
  2. tar -xzf kafka_2.10-0.8.2.0.tgz
  3. cd kafka_2.10-0.8.2.0/
  4. Inside the config folder you will see the server and zookeeper config files
  5. Inside the bin folder you will see shell scripts for starting zookeeper, the server, a producer, and a consumer
  6. Start zookeeper – bin/zookeeper-server-start.sh config/zookeeper.properties
  7. Start the server – bin/kafka-server-start.sh config/server.properties
  8. Create a topic – bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor <your_replication_factor> --partitions <no._of_partitions> --topic <your_topic_name>
    This will create a topic with the specified name, partitioned according to the partition count and replicated to brokers according to the replication factor. The replication factor should not be greater than the number of brokers available.
  9. View topics – bin/kafka-topics.sh --list --zookeeper localhost:2181
  10. Delete a topic – add the line delete.topic.enable=true to the server.properties file,
    then fire this command after starting zookeeper:
    bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic <topic_name>
  11. Alter a topic – bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic <topic_name> --partitions <new_partition_count> (for example, to increase the partition count)
  12. Start a producer – bin/kafka-console-producer.sh --broker-list localhost:9092 --topic <your_topic_name> and send some messages
  13. Start a consumer – bin/kafka-console-consumer.sh --zookeeper localhost:2181 --from-beginning --topic <your_topic_name> and view the messages

If you want to have more than one server, say four (the default setup runs a single broker), the steps are:

  1. Create a server config file for each of the additional servers:
    cp config/server.properties config/server-1.properties
    cp config/server.properties config/server-2.properties
    cp config/server.properties config/server-3.properties
  2. Edit each of the property files you created to give every broker a unique broker id, port, and log directory. For example:
    vi config/server-1.properties and set the following properties
    broker.id=1
    port=9093
    log.dir=/tmp/kafka-logs-1
    Repeat this for server-2 and server-3 with different values.
  3. Start Servers :
    bin/kafka-server-start.sh config/server-1.properties &
    bin/kafka-server-start.sh config/server-2.properties &
    bin/kafka-server-start.sh config/server-3.properties &

Now we have four servers running (server, server-1, server-2, server-3).

PROGRAMMING

The Java program includes a producer class and a consumer class.

Producer Class :

The producer class is used to create messages and specify the topic name, with an optional partition key.

The Maven dependency to be included is shown below.
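As a sketch, assuming Kafka 0.8.2.0 built for Scala 2.10 (matching the tarball downloaded above):

    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka_2.10</artifactId>
        <version>0.8.2.0</version>
    </dependency>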

We need to define properties for the producer so it can find brokers, serialize messages, and send them to the partitions it wants.
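A minimal sketch of such a producer, using the 0.8 client API bundled with the download above (the class, topic, and message names are illustrative):

    import java.util.Properties;
    import kafka.javaapi.producer.Producer;
    import kafka.producer.KeyedMessage;
    import kafka.producer.ProducerConfig;

    public class ProducerExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Brokers the producer contacts to discover the cluster.
            props.put("metadata.broker.list", "localhost:9092");
            // Serializer that turns message payloads into bytes.
            props.put("serializer.class", "kafka.serializer.StringEncoder");
            // Wait for the partition leader to acknowledge each write.
            props.put("request.required.acks", "1");

            Producer<String, String> producer =
                    new Producer<String, String>(new ProducerConfig(props));
            // No key attached: the default partitioner picks a partition.
            producer.send(new KeyedMessage<String, String>("my-topic", "hello kafka"));
            producer.close();
        }
    }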

When the producer sends data, we can pass an extra item (say, an id) along with it as the message key.
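Reusing the producer from the sketch above, a keyed send might look like this (the id value is illustrative):

    // Attach a key (an id); all messages with the same key land in the same partition.
    String id = "user-42";
    producer.send(new KeyedMessage<String, String>("my-topic", id, "payload for " + id));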

Before being published to the brokers, the data goes through the partitioner class mentioned in the properties, which selects the partition to which the data will be published.
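A minimal sketch of such a partitioner against the same 0.8 API (the hash-modulo scheme is illustrative):

    import kafka.producer.Partitioner;
    import kafka.utils.VerifiableProperties;

    public class SimplePartitioner implements Partitioner {
        // Kafka instantiates the partitioner reflectively and hands it the config.
        public SimplePartitioner(VerifiableProperties props) { }

        // Map a message key to a partition number; mask the sign bit so the
        // result is always non-negative.
        public int partition(Object key, int numPartitions) {
            return (key.hashCode() & 0x7fffffff) % numPartitions;
        }
    }

It is wired in through the producer properties, e.g. props.put("partitioner.class", "SimplePartitioner");.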

Consumer Class :
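The consumer class subscribes to a topic and pulls messages from the brokers, with its offsets tracked in Zookeeper as described earlier. A minimal sketch using the 0.8 high-level consumer API (the group id and topic name are illustrative):

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Properties;
    import kafka.consumer.Consumer;
    import kafka.consumer.ConsumerConfig;
    import kafka.consumer.ConsumerIterator;
    import kafka.consumer.KafkaStream;
    import kafka.javaapi.consumer.ConsumerConnector;

    public class ConsumerExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("zookeeper.connect", "localhost:2181"); // offsets live in Zookeeper
            props.put("group.id", "my-group");                // consumer group id
            props.put("auto.offset.reset", "smallest");       // start from the beginning

            ConsumerConnector connector =
                    Consumer.createJavaConsumerConnector(new ConsumerConfig(props));

            // Request one stream (one consuming thread) for the topic.
            Map<String, Integer> topicCount = new HashMap<String, Integer>();
            topicCount.put("my-topic", 1);
            Map<String, List<KafkaStream<byte[], byte[]>>> streams =
                    connector.createMessageStreams(topicCount);

            // Iterate over incoming messages; hasNext() blocks until one arrives.
            ConsumerIterator<byte[], byte[]> it = streams.get("my-topic").get(0).iterator();
            while (it.hasNext()) {
                System.out.println(new String(it.next().message()));
            }
        }
    }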

Topic Creation :
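Besides the kafka-topics.sh script shown earlier, topics can be created programmatically. A sketch using the AdminUtils helper from the same client jar (the Zookeeper address, topic name, and counts are illustrative):

    import java.util.Properties;
    import kafka.admin.AdminUtils;
    import kafka.utils.ZKStringSerializer$;
    import org.I0Itec.zkclient.ZkClient;

    public class TopicCreationExample {
        public static void main(String[] args) {
            // The serializer must match the one Kafka's own tools use.
            ZkClient zkClient = new ZkClient("localhost:2181", 10000, 10000,
                    ZKStringSerializer$.MODULE$);
            // Create "my-topic" with 2 partitions and a replication factor of 1.
            AdminUtils.createTopic(zkClient, "my-topic", 2, 1, new Properties());
            zkClient.close();
        }
    }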

In summary, Kafka is a distributed commit log service that functions much like a publish/subscribe messaging system, but with better throughput, built-in partitioning, replication, and fault tolerance.

Applications:

  1. Stream processing
  2. Messaging
  3. Multiplayer online games
  4. Log aggregation