A Data-Aware Shell for Faster Distributed Text Processing
The Unix command line offers a rich set of data processing tools, such as awk, for searching and filtering large text files. But executing these commands on remote data over a campus network, or across a cloud, can bring research to a halt as the data scientist waits for the results to be returned to the command line or to a local file.
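To make the workload concrete, here is a minimal local stand-in for the kind of pipeline in question (the paths and log format are hypothetical). Over NFS, every byte of the log would cross the network before awk ever saw it:

```shell
# Build a tiny "access log", then run a typical filter pipeline over it.
tmp=$(mktemp -d)
printf 'GET /a 200\nGET /b 500\nGET /a 500\n' > "$tmp/access.log"

# Print the request paths of server-error lines, most frequent first.
awk '$3 >= 500 {print $2}' "$tmp/access.log" | sort | uniq -c | sort -rn
rm -rf "$tmp"
```

The pipeline itself is tiny; the cost on real data comes almost entirely from moving the input across the network.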
“Shells should consider data locality,” explained Deepti Raghavan, a Stanford University Ph.D. student who is one of the creators of the data-aware Process Offload Shell (POSH), which she introduced during a presentation at the USENIX SRECon20 Americas conference held virtually last month.
Currently, POSH is a prototype, but the project raises some interesting ideas around the best ways to divide work so that it gets done as quickly as possible, while making it easier for the end user to execute these tasks. Tests have found a POSH-based approach can offer speedups of 1.5x to 15x on workloads over remote file systems, without modifying the data or the standard command-line tools.
POSH includes both a shell and an associated distributed runtime, and can speed the processing of remote data considerably by moving the computationally heavy work to where the data resides, such as an NFS file server. Commands are issued in the user's local shell but are actually executed on the server holding the data, which can greatly expedite processing. Only the output is then shipped back to the command line on the local machine.
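The saving comes from shipping the answer instead of the input. A minimal local simulation of that difference (a generated file stands in for remote data; no real network is involved):

```shell
tmp=$(mktemp -d)
seq 1 100000 > "$tmp/big.txt"   # stand-in for a file on a remote NFS server

# Client-side filtering: the whole file effectively moves to the client first.
bytes_if_moved=$(wc -c < "$tmp/big.txt")

# Server-side filtering (what POSH's proxies do): only the answer moves.
answer=$(grep -c '999' "$tmp/big.txt")

echo "input: $bytes_if_moved bytes; answer: ${#answer} bytes"
rm -rf "$tmp"
```

The larger the input and the smaller the result, the more the placement of the computation dominates end-to-end time.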
Traditional approaches can be a time sink because they involve moving the data to the client, which can be very slow for large data sets. Other systems, such as MapReduce and Spark, take POSH's approach of moving the processing to the data, but for the data researcher they can be cumbersome to use, requiring code to interface with their APIs. “There might be more overhead than it is worth to use these systems,” she said.
The idea is to “run this command closer to the storage without changing the workflow of the developer,” Raghavan said.
POSH offers a shell identical to the canonical Bash shell, but it offloads some of the work that the commands require to proxy servers running on or near the storage, which process the data in place. “This prevents lots of unnecessary data movement,” Raghavan said.
To determine what parts of a workload can be executed remotely, POSH uses a set of annotations and metadata about individual shell commands to decide where in the shell pipeline to hand off the work to the remote proxy server. In general, users shouldn't need to write annotations themselves, though annotations must be created once for each relevant Unix command.
This metadata documents the file dependencies of each command, as well as all of the command's options and parameters. For pipelines involving multiple tools, the runtime also needs to know how much data flows between the commands and whether each command can be parallelized across different servers. The runtime further includes a scheduling algorithm that places a workload across multiple servers to achieve an optimal execution time.
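The exact annotation format is defined in the POSH paper; the sketch below is only illustrative of the kind of metadata involved, not POSH's actual syntax:

```
# Hypothetical annotation sketch (not POSH's real format):
grep:
  args: [pattern: string, inputs: input_file...]
  splittable: true      # one grep instance can run per file or chunk
tar:
  args: [archive: output_file, inputs: input_file...]
  splittable: false     # the output must come from a single instance
```

Given metadata like this, the runtime can tell which pipeline stages may fan out across proxies and which must run in one place.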
When a user types in a command, POSH generates a Directed Acyclic Graph (DAG) to represent the entire command workflow, which it can then execute.
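For example, a pipeline that filters several files could be represented as a fan-in DAG, with one filter node per input file placed on the proxy nearest that file; the placement sketched in the comments is hypothetical:

```shell
tmp=$(mktemp -d)
printf 'ok\nERROR one\n' > "$tmp/a.log"
printf 'ERROR two\nok\n' > "$tmp/b.log"

# One possible DAG for the pipeline below:
#   a.log -> grep ERROR (proxy near a.log) \
#                                            -> merge -> local shell output
#   b.log -> grep ERROR (proxy near b.log) /
grep -h ERROR "$tmp/a.log" "$tmp/b.log"   # prints "ERROR one" then "ERROR two"
rm -rf "$tmp"
```

Only the merged matches would need to travel back to the local shell.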
POSH is best used for I/O-intensive workloads where the data is stored in remote storage, such as NFS. In one test, the researchers ran a combined cat and grep command over 80GB of data spread across five different proxy-equipped servers; the results returned amounted to only a minuscule 0.8KB. The test was run on both a cloud setup and a traditional university network, where the team found a 10x speedup on the university network and a 2.5x speedup in the cloud.
In another case, the team looked at the speed of three git commands (including status) across a code repository. Here the add command returned results 10 to 15 times as fast as it would through the traditional approach. With the add command, git returns the status of each file checked, which, in a traditional setup, leads to a lot of back-and-forth between the shell and the remote file server.
“POSH saves on latency by avoiding many round trips,” Raghavan said.
The POSH paper and a video of Raghavan's presentation are available online.