Wanna be a DevOps Engineer? Here’s How!

DevOps is the fusion of cultural philosophies, practices, and tools that increases an organisation’s ability to deliver products and services at high velocity: evolving and improving products at a faster pace than organisations using traditional software development and infrastructure management processes. This speed enables companies to serve their customers comprehensively and stay ahead of their competitors. DevOps is the offspring of agile software development, born from the need to keep up with the increased development speed and throughput that agile methods achieved. Growth in agile culture and methods over the past few years revealed the need for a more holistic approach to the end-to-end software delivery lifecycle.

Who is a DevOps Engineer?

A DevOps Engineer is a professional who understands the Software Development Lifecycle and has in-depth knowledge of the automation technologies used to build advanced pipelines (like CI/CD). DevOps Engineers work with developers and the IT team to manage code releases. They are either developers who become interested in deployment and network operations, or system administrators with a passion for scripting and coding who move towards the development side, where they can plan testing and deployment.

In DevOps, code changes continuously and incrementally so that testing and deployment remain feasible. It may not always be practical for DevOps Engineers to write the code from scratch over and over again, but they still need to understand it. DevOps Engineers have to connect the various elements of the code, such as libraries and software development kits, and integrate other components, such as SQL data management or messaging tools, to run software releases and deployments on the operating system and the production infrastructure. This article walks you through the skills required to be a DevOps Engineer:

1. Knowledge of Prominent Automation Tools

DevOps is continually evolving. To keep your DevOps skills up to the mark, you should stay updated with the best DevOps tools. These tools facilitate faster bug fixes and improved operational support, along with increased team flexibility and agility. They result in happier and more engaged teams and promote cross-skilling, self-improvement and collaborative working. The top DevOps tools are:

a) Bamboo: Bamboo comes with numerous pre-built functionalities that help you automate your delivery pipeline, from builds to deployment. You don’t need that many plugins with Bamboo, as it does a lot out of the box with fewer yet more efficient modules.

b) Docker: Docker has been one of the most significant DevOps tools out there. Docker made containerisation mainstream in the tech world, mostly because it makes distributed development possible and automates the deployment of your applications. It isolates applications into separate containers, so they become portable and more secure.
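
For a flavour of how simple the workflow is, two everyday commands (the image name and port below are only placeholders) are enough to package and run an application:

    docker build -t my-app .         # build an image from the Dockerfile in the current directory
    docker run -p 8080:8080 my-app   # start a container and map port 8080 to the host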

c) Git: Git is one of the most renowned DevOps tools and is extensively used across the DevOps industry. It’s a distributed source code management tool that is highly appreciated by remote team members, freelancers, and open-source contributors. Git enables you to track the progress of your development work.
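
A typical day-to-day Git flow, for illustration, is only a handful of commands (the repository URL and branch name are placeholders):

    git clone https://github.com/example/project.git   # get a local copy of the repository
    git checkout -b feature/login                       # work on an isolated branch
    git commit -am "Add login form"                      # record the change locally
    git push origin feature/login                        # share the branch with the team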

d) Jenkins: Jenkins is a reliable and widely trusted automation tool used by a great number of DevOps teams across the globe. It’s an open-source CI/CD server that enables engineers to automate the various stages of the delivery pipeline. Its vast plugin ecosystem has made it a very popular tool: it currently offers more than 1,000 plugins, and counting, so it integrates with the majority of DevOps tools.

e) Raygun: Spotting bugs and finding performance issues is a fundamental need of the DevOps process. Raygun is an application performance monitoring tool that can help you discover bugs and track down performance issues through continuous monitoring.

f) Gradle: Gradle is a developer build tool used by tech giants like Google to build applications, and it is designed to be extensible in the most fundamental ways. For instance, Gradle can be used for native development with C/C++ and can also be extended to cover other programming languages and platforms.

g) Ansible: Ansible is an open-source application deployment, configuration management, and software provisioning tool that can run on UNIX-based systems as well as Windows. This DevOps tool configures the infrastructure for software development and also automates deployment and delivery.

h) Kubernetes: While Docker lets you run applications in containers, Kubernetes goes a step further by letting engineers run containers across a cluster in a secure way. With Kubernetes, developers can automatically manage, monitor, scale, and deploy cloud-native applications. Kubernetes works as a powerful orchestrator that manages communication between containers and coordinates them as a group.

i) Puppet: Puppet is a renowned tool used for configuration management. It is an open-source platform with a declarative language for describing system configuration. It can run on a variety of systems, including Unix-based systems, IBM mainframes, macOS Servers, Cisco switches, and Microsoft Windows. It is essentially used to manage many application servers at once.

j) ELK Stack: The ELK Stack is a combination of three open-source projects (Elasticsearch, Logstash, and Kibana) that helps you gather insights from your log data. With downloads running into the millions, the ELK Stack is one of the most popular log management platforms. It is a superb DevOps tool for organisations that need a centralised logging framework. It comes with a powerful and flexible technology stack that can streamline operational workloads and also offer business insights at no extra cost.

2. Programming Skills and a basic understanding of Scripting Languages

A DevOps Engineer need not be a coding expert but must have fundamental knowledge of coding and scripting. These languages are mostly used to design automation processes and to achieve continuous integration/continuous delivery (CI/CD). The top DevOps programming languages are:

C: In the internet era, a great deal of systems code is written in C, and other languages reuse a significant number of its modules to ease the programming experience. Learning C is worthwhile for building elementary coding knowledge and for working on projects such as KVM and QEMU.

JavaScript: Much of the world wide web is built on JavaScript. Many of the most well-known frameworks and libraries are written in JavaScript, from Angular and React to Node. Back-end execution isn’t the only thing this language brings to the table: its massive community of developers means there’s always help available on GitHub or Stack Overflow. JavaScript is a safe bet for engineers.

Python: Python has been used to build cloud infrastructure projects and supports web applications through frameworks like Django. It is an approachable, all-purpose language with a wide scope of utility. Python also encourages good programming practices through its explicit style requirements, which help ensure that code written by one person is understandable to another, an important property in a DevOps world where visibility should be constant.

Ruby: Ruby benefits from an enormous collection of community-produced modules that anybody can incorporate into applications to add functionality without writing new code themselves. It enables a very flexible approach to programming and doesn’t expect developers to adopt one particular way of writing code.

3. CI/CD (Continuous Integration/Continuous Delivery)

Knowledge of different automation tools isn’t sufficient; you must also know where to use them. These automation tools should be used to enable Continuous Integration and Continuous Delivery. Continuous Integration and Continuous Delivery are the practices by which your development team continuously merges code changes into the main branch while ensuring they don’t break the changes made by developers working in parallel.

4. Software Security

DevSecOps (security in DevOps) has emerged as one of the tech buzzwords of the past year for a simple reason: while DevOps helps create and deploy software much more quickly, it also exposes a lot of vulnerabilities, since security teams can’t keep up with the faster cycle. Put simply, not only excellent code but also bugs and malware can now be shipped a lot faster. Introducing DevOps without mature security processes in the IT organisation is a catastrophe waiting to happen. Accordingly, DevOps Engineers ought to have fundamental software security skills so they can bring security into the SDLC right from the start.

 5. Efficient Testing Skills

DevOps is hugely affected by how well testing is done in a tech-based company. You can’t automate the DevOps pipeline if effective continuous testing, the practice of executing automated tests throughout the pipeline, isn’t in place. Continuous testing ensures that each automated test gets executed the way it should; otherwise there is a huge risk of pushing faulty code straight to customers, which isn’t acceptable.

 6. Soft Skills

DevOps requires not only hard skills like coding and automation but also soft skills such as adaptability, self-motivation, and empathy. A DevOps Engineer is somebody who builds relationships and removes bottlenecks, which is achieved by talking to people. Communication and collaboration are the skills that can make or break a DevOps Engineer in any organisation. They ought to understand how the organisation runs, who the people managing it are, and what the organisation’s culture is, to avoid creating points of friction and constraint.

Role of a DevOps Engineer

DevOps professionals come from a multitude of IT backgrounds and begin the role at different points in their careers. Generally, the role of a DevOps Engineer is not as easy as it appears. It requires ensuring seamless integration among the teams and successfully and continuously deploying the code. The DevOps approach to software development requires recurring, incremental changes, and DevOps Engineers seldom code from scratch. However, they must understand the fundamentals of software development languages and be familiar with the development tools used to write new code or update existing code.

A DevOps Engineer works alongside the development team to handle the coding and scripting needed to connect the elements of the code, such as SDKs or libraries, and to integrate the other components, such as messaging tools or a SQL DBMS, that are required to run the software release on the operating systems and production infrastructure. They ought to be able to manage the IT infrastructure that supports the software, whether in multi-tenant or hybrid cloud environments. They need to provision the required resources, choose the appropriate deployment model, validate the release, and monitor performance. DevOps Engineers could either be network engineers who have moved into coding or developers who have moved into operations. Either way, it is a cross-functional role that is driving an immense change in the way software is developed and deployed in mission-critical applications.

Conclusion:

DevOps isn’t very hard to understand; it just requires a person to have a solid mix of hard and soft skills. DevOps specialists ought to be able to do a great deal on the tech side of things, from using specific DevOps tools and managing infrastructure in the cloud to writing secure code and monitoring automated tests. They ought to be people who are passionate about what they do and who are prepared to deliver an enormous amount of value. They ought to be curious and proactive, compassionate and self-assured, strong and reliable. They ought to be able to place customers’ needs over their team’s needs and take action when required. The DevOps job isn’t simple, but it is absolutely worth the effort to become a DevOps Engineer. To get things off the ground, check how many of the DevOps skills highlighted in this article you already have. If you come up short on some of them, be proactive and start learning right now!

 

Employing Automation to Test a Data Interface

Say you have a DB comprising huge amounts of data, with billions of records. You want to showcase it on the UI only after making sure that everything you intend to represent on the UI is accurate and as expected. Incorrect data could impact your business in unknown and serious ways that can lie undetected for months. So, here you might need to plan a new strategy, one that answers all of these questions. One of the finest approaches within this strategy is to make sure that everything you show is validated and verified. This leads to a special type of testing called Data Interface Testing.

What is Data Interface Testing?

Before we go ahead with Data Interface Testing, let’s first discuss the data interface itself. Many applications on the market nowadays are based on Data Mining or Big Data concepts. This helps to streamline big data and present it on the UI in an adequate manner.

Now, as many people say, every process has its pros and cons, and this one has a few as well. One of the biggest is showcasing huge amounts of data. But there is a solution.

There is always a challenge in showing huge data sets on the UI, where everything must appear in its respective place with the correct data set and the correct orientation (if you’re showing the data in a graphical representation).

 

So, the interaction between the database and the User Interface brings us the term Data Interface. Making sure that it works well both ways, i.e. for requests and response results, is what we call Data Interface Testing.

The Big Question!

Can I test this much data, and all of it manually?

The answer is yes, it is possible. But practically it is not a good way to do it.

So what then? Automation?

Yes.

But what if I don’t have good knowledge of it?

Don’t worry, we have a shortcut for you: a tool to test the data interface automatically, with only very basic knowledge of automation and coding.

Automation Tool

Some information about this tool
This tool is built to validate and verify data between the database and the user interface. One can easily use it in one’s own working environment by customizing the details in the provided property files and the code as per one’s requirements.

How does this tool work?

Through its main class, it reads multiple property files and executes the methods driven by those properties.

For reference, the structure of the main class of the tool is outlined below.
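
The original screenshots of the source are not reproduced here, so the listing below is only a minimal, illustrative sketch of how such a main class could be put together. It assumes the db_property, query_column, query_expected and query_actual files described in the following sections, a Presto JDBC driver on the classpath, and made-up file names and a simple value comparison; the real tool’s code may differ.

    import java.io.FileInputStream;
    import java.io.FileWriter;
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.util.Properties;

    public class DataInterfaceTester {

        // Load a .properties file from disk; the query name is the key in each file.
        static Properties load(String path) throws Exception {
            Properties p = new Properties();
            try (FileInputStream in = new FileInputStream(path)) {
                p.load(in);
            }
            return p;
        }

        // Run a query and collect the values of the requested column from the result set.
        static String fetchColumn(Connection con, String sql, String column) throws Exception {
            StringBuilder values = new StringBuilder();
            try (Statement st = con.createStatement(); ResultSet rs = st.executeQuery(sql)) {
                while (rs.next()) {
                    values.append(rs.getString(column)).append(';');
                }
            }
            return values.toString();
        }

        public static void main(String[] args) throws Exception {
            Properties db       = load("db_property.properties");
            Properties columns  = load("query_column.properties");
            Properties expected = load("query_expected.properties");
            Properties actual   = load("query_actual.properties");

            Class.forName(db.getProperty("ClassName"));   // e.g. the Presto JDBC driver
            try (Connection con = DriverManager.getConnection(
                     db.getProperty("Url"), db.getProperty("UserName"), db.getProperty("Password"));
                 FileWriter result = new FileWriter("result_" + System.currentTimeMillis() + ".txt")) {

                // Every query name must exist in the column, expected and actual property files.
                for (String queryName : columns.stringPropertyNames()) {
                    String column = columns.getProperty(queryName);
                    String exp = fetchColumn(con, expected.getProperty(queryName), column);
                    String act = fetchColumn(con, actual.getProperty(queryName), column);
                    String verdict = exp.equals(act) ? "PASS" : "FAIL";
                    result.write(queryName + " | expected=" + exp + " | actual=" + act
                            + " | " + verdict + System.lineSeparator());
                }
            }
        }
    }

The key idea is simply that every query name acts as the link between the column, the expected query and the actual query, which is why the same name must appear in all the property files.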

The methods in the above class depend on various files. One of them is called the Property_Reader file.

This is a custom-made file, which executes multiple methods and returns the results to the main class.

1. Property_Reader_Method

2. db_property

This file comprises all the properties that help in setting up the connection with the server/DB.

The following properties are used in this file:

  1. Url=jdbc:presto://10.0.11.198:8080/test/default
  2. UserName=root
  3. Password= 12345
  4. ClassName=com.facebook.presto.jdbc.PrestoDriver
  • You can change the URL and credentials as per the available server(s).
  • The password can be null too, depending on the server details.
  • Currently we are using Presto as the DB.

3. query_column

This file comprises the column name for which data needs to be fetched from the DB. For every query there should be a unique query name, which must be identical across all the property files for that query.

In the example below, “testA_count” is the name of one query; it is unique from the other two but stays the same across the other property files for queries with the same conditions.
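
For instance, a query_column file could look like the following (the query names and column labels are hypothetical, used here only to illustrate the format):

    testA_count=Column A
    testB_count=Column B
    testC_count=Column C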

Apart from that, irrespective of the number of columns available in the expected and actual queries, it will only bring back data for the “Column A” column in the result set.

The same applies to the others.

4. query_actual

This property file contains the queries created by the developers, or fetched from the server log files that are generated while accessing the application through the UI.

5. query_expected

This property file contains the queries created by testers.
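
Continuing the hypothetical example, the same query name then appears in query_expected and query_actual with the tester’s and the developer’s versions of the query; the table names and SQL below are made up purely for illustration:

    # query_expected
    testA_count=SELECT count(*) AS "Column A" FROM test.orders WHERE order_status = 'OPEN'

    # query_actual
    testA_count=SELECT count(*) AS "Column A" FROM test.orders_summary WHERE order_status = 'OPEN'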

By running the above-mentioned main class, a new result file is created on every run. This file comprises the end results for the executed queries, with the expected and actual results as numbers and a pass/fail verdict.
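
For the hypothetical queries above, a result line might look like this (the exact layout depends on how the tool writes its output):

    testA_count | expected=1245 | actual=1245 | PASS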

For a Failed Case:

Let’s change the actual query so that it no longer matches the expected result.
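
Continuing the same hypothetical example, the entry in query_actual could be changed to a query that returns a different count:

    # query_actual
    testA_count=SELECT count(*) AS "Column A" FROM test.orders_summary WHERE order_status = 'CLOSED'

Running the main class again would then record the mismatch in the result file, for example:

    testA_count | expected=1245 | actual=310 | FAIL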

Points to be considered:

  • Make sure that the query name is always the same in the expected, actual and column property files.
  • For every query there should be a unique query name.

The benefits of using this tool are:

  • Any structured DB can be used, e.g. Presto, MySQL, MS-SQL, etc.
  • Platform independent: it can be run on Windows/Ubuntu/Linux.
  • It can be used with a project written in any language.
  • It doesn’t require any prior coding skills or automation knowledge.
  • One can simply put the expected and actual test cases in the respective property files and get a result set with complete information.
  • It can be easily customized as per the available resources/requirements.

 

Your Guide to API testing: Postman, Newman & Jenkins

API testing is a type of software testing wherein an engineer tests not just the functionality but also the performance, reliability and security of an application’s APIs. APIs are tested to examine whether the application works the way it is expected to, as APIs are the core of an application’s functionality.

What Is API Testing?

API testing during development can reveal issues with the API, the server, other services, the network and more, issues that one may not discover or solve easily after deployment.

However, testing APIs is difficult. Instead of just verifying an endpoint’s response, one can write integration tests with Postman to examine and validate the responses. Teams these days may also want to automate running these tests as soon as a deployment is done. One approach we can take is to have our integration tests run every time a developer checks code into the repo.

Adding this layer of quality checks ensures that the existing functionalities still work the way they were expected to, with the additional benefit that developers can validate that their code does exactly what it was intended to do.

Tools for API test automation in CI

CI refers to continuous integration: the integration of test scripts and a test tool with the continuous build system, so that the test scripts can be run with every new deployment or on a regular basis (daily, weekly or fortnightly).

  1. Postman: Integration tests with Postman.
  2. Newman: Create a PowerShell file that runs these integration tests via command line.
  3. Jenkins: Add a Post Build step in Jenkins to execute the PowerShell script whenever a build is initiated.

How to use Postman with Newman & Jenkins for Continuous Integration

 

API Selection

I have implemented this procedure in our project using the GPS APIs, but to illustrate here, let’s take up the following APIs:

Open Weather Map: Free public APIs.

I chose this because it is a free collection of APIs that anyone can subscribe to in order to get their own API keys to work with.

Create Integration Tests

For the first test, let’s take up a simple GET request to get the weather by city ID. To interact with the APIs, make sure to use the API key received on subscribing to the OWM services.

Steps to First Integration Test

Create an environment in Postman, say ‘Weather Map’, and define the environment variables in it [refer to ‘Managing environments’].

Add the prerequisites in the Pre-request Script tab to set up the test; the assertions themselves go in the Tests tab.
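
As a small sketch of what such a test can look like (the city ID, variable names and assertions below are illustrative, not taken from a particular collection), Postman scripts are written in JavaScript:

    // Pre-request Script tab: set up the data the request will use
    pm.environment.set("city_id", "2172797");   // hypothetical city ID

    // Tests tab: assertions evaluated after the response arrives
    pm.test("Status code is 200", function () {
        pm.response.to.have.status(200);
    });

    pm.test("Response is for the requested city", function () {
        var body = pm.response.json();
        pm.expect(body.id).to.eql(2172797);
    });

The request itself can then reference the environment variables, e.g. api.openweathermap.org/data/2.5/weather?id={{city_id}}&appid={{api_key}}, where api_key holds the key obtained from OWM.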

Collections

Like the above API test, one can have multiple test scripts for multiple endpoints, and these test scripts can be run in sequence to form an end-to-end test suite. The way to build such a test suite is to keep multiple test scripts in a placeholder called a Collection in Postman.

These collections can then be executed through the Collection Runner in the tool.

Collection Runner

The Collection Runner can be used to keep a collection of API endpoints with their test scripts in one place and run them one by one in a sequential manner. The user just needs to run the collection once with all the required test data and test scripts, for as many iterations as desired. The result of the collection run is a test report, comprehensive enough to monitor the performance of the APIs and also to re-run the failed test scripts.

For elaborate study on Collection Runners, refer Link.

Though the user interface of Postman’s Collection Runner is good enough, to integrate the system with Jenkins we need to run our collections via the command line. The way to run collections from the command line is Newman.

Newman

Newman is an npm (Node Package Manager) package that allows us to run and test Postman collections directly from the command line.

Pre-requisites:

  • NodeJS and
  • NPM already installed.

Commands to be run on Windows Powershell

  • node -v [to verify the version of NodeJs installed]
  • npm -v [to verify the version of NPM installed]
  • npm install -g newman [to install Newman]

Once the required installations are done, one needs to export the collections and the environment to JSON files on the local system. These files can then be passed as arguments to Newman.

Steps to get the environment and collections onto the local system:

  • Click on the Download and Export button in Postman.
  • Download the collection
  • Download the environment
  • Open command prompt and raise your privileges. This is important for you to execute the script.
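
With the JSON files exported, a collection can be run against an environment with a single command; the file names below are placeholders for your own exports:

  • newman run WeatherMap.postman_collection.json -e WeatherMap.postman_environment.json [to run the exported collection against the exported environment]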

Adding Postman Tests to Jenkins

We first need to export our Postman files (environment and collections) and add them to Git, along with our PowerShell script, to run the tests through the Jenkins build.

“Add the Postman files to the root of the project.”

Telling Jenkins to run Newman

For this we write a script that calls Newman and passes it the environment and collection JSON files.

–  ‘exit $LASTEXITCODE’: this exits the script with the exit code of the last command that ran. We do this to make sure that the Newman command was successful on every run: if any of the tests fail, Newman returns a non-zero exit code, the script exits with it, and the build is marked as failed on Jenkins.
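
A minimal sketch of such a PowerShell script, assuming Newman is installed globally and the exported JSON files sit next to the script (the file names are placeholders), could look like this:

    # Marker used later to confirm in the Jenkins console output that the script was entered
    Write-Host "Inside Powershell script"

    # Run the exported collection against the exported environment via Newman
    newman run .\WeatherMap.postman_collection.json -e .\WeatherMap.postman_environment.json

    # Propagate Newman's exit code: a non-zero code marks the Jenkins build as failed
    exit $LASTEXITCODE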

Adding Script to Jenkins

Steps:

  • Login to Jenkins and create a Freestyle Project.
  • Start by configuring this project to pull your repo code from Git.
  • In the project configuration, go to the Build section and add a build step that executes the PowerShell script.

Running the build and monitoring Results

Try running the project and examine the results.

One can confirm successful entry into the PowerShell script by the statement ‘Inside Powershell script’ appearing in the Jenkins output.

Conclusion

Introducing continuous integration using Postman, Newman and Jenkins adds another layer of quality assurance to our development life cycle. While this is a huge step in automation, we need to emphasise that our test coverage depends on the quality of our test scripts.

Ways to better data processing in Self-Driving Cars

Autonomous cars promise to change the face of transportation, offering many more mobility options for individual motorists and companies alike. In moving forward with this new technology, our automotive clients have a very important challenge to overcome: processing the petabytes of data that gets collected during the development and testing of autonomous driving systems.

KPIs have always been important to car makers. They are necessary to attain road approvals and to track key competitive differentiators. With autonomous cars, however, car makers are accumulating – and must find ways to process and manage – 10, 20, sometimes 30 times the data as before.

As a result, they need much more efficient data analysis tools that can help them analyze the data for the specific autonomous car KPIs they are looking for. To make this happen, they need to take the following four steps:

  1. Make sure the car’s sensors are working. There are typically eight to 12 sensor systems in an autonomous vehicle test car. It’s important to look at the data at the very beginning of the workflow by checking the KPIs to ensure that the system works properly. Some of the KPIs car testers evaluate include the following: vehicle operations, safety, environmental impact and in-car network efficiency.
  2. Scale the workflow to process the data. Traditional architectures of automotive frameworks are not suited to the large-scale data processing workloads required for testing the algorithms used in autonomous car tests. With traditional data storage methods, vehicle test data is stored in NAS-based storage and is then transferred to workstations, where engineers test algorithms under development. This process has two downsides:
    • Large amounts of data must be moved, requiring considerable time and network bandwidth.
    • Individual workstations do not offer the massive computing power required to return test results fast enough.

    Today, testers are extracting each frame of video data with its associated Radar, Lidar and sensor data by using open source Hadoop. The major benefit of Hadoop is that it scales processing and storage to hundreds of petabytes. This makes it a perfect environment for testing autonomous driving systems.

  3. Make the most of data analytics. In processing petabytes of automotive data, we have to look at how we present the data to higher-level services. New data analysis tools can read different automotive formats to give us proper levels of access to the metadata and data. For example, if we have 700 video recordings, we now have tools that can pinpoint footage from the front-right camera alone to show how the car performed when making right-hand turns. We can also use the footage to determine the accuracy of a model depicting the autonomous car’s perception of its ambient physical surroundings.
  4. Run the data analysis. In the end, we want to use data analysis tools to give R&D engineers a complete view of how the car has performed in the field. We want to generate information on how the systems will react under normal driving conditions.

Overcoming these data analysis challenges is critical. Manufacturers can’t obtain permits for releasing their cars until they can show that the cars performed up to certain standards in road tests. And when autonomous cars do start to hit the roadways in the next few years, auto manufacturers might need the KPIs they generated in testing. A few accidents are inevitable and, when questions arise, car makers can use KPIs to show the authorities, insurance companies and the general public how the cars were tested and that proper due diligence was performed.

Right now, there’s some distrust among the driving public of autonomous cars. It will take a massive public relations effort to convince consumers that autonomous cars are safer than traditional manually-driven cars. But proving that case all starts with the ability to process the data more efficiently.

