In daily work you often need to copy or move large numbers of files, locally or between servers. The more scattered the files are, the slower the transfer becomes, so I wondered whether there is a faster way. After collecting, sorting, and verifying candidates, here is roughly what I found.

First, whether local or remote, when you need to move or copy many files that are not too large, the cp and mv commands are inefficient. You can use tar to pack/compress the content first, then copy/move the single archive, and finally unpack/decompress it. There is a further key trick: you do not have to pack, copy, and unpack as three separate steps — tar can pack on one side of a pipe while the other side unpacks, all in one go. For example, tar combined with nc can quickly transfer files and directories between two machines (B_IP below stands for machine B's IP address).

On machine B (the receiver):

nc -l 5555 | tar -C /tmp/test/ -xf -

On machine A (the sender):

tar -cf - /tmp/test/ | nc B_IP 5555

These steps copy the contents of /tmp/test/ on machine A to the corresponding directory on machine B: tar -cf - /tmp/test/ | nc B_IP 5555 packs the content into the pipe and sends it with nc to machine B's IP address on port 5555, while nc -l 5555 | tar -C /tmp/test/ -xf - listens on port 5555 of machine B and unpacks the received stream into the specified directory (the -C option sets the target directory).

tar can also be combined with the ssh and scp commands. Pack on machine A, copy to machine B, and unpack there in one step:

tar -cf - /tmp/test | ssh B_IP "cd /tmp; tar -xf -"

Pack on machine A and copy only the archive file to machine B:

tar -cf - /tmp/test | ssh B_IP "cd /tmp; cat - > test.tar"

or create the archive first and copy it with scp (scp cannot read from a pipe, so the archive must exist as a file):

tar -cf test.tar /tmp/test
scp test.tar B_USER@B_IP:/tmp

Copy an existing archive from machine A to machine B and unpack it there (use zcat instead of cat if the archive is gzip-compressed):

cat test.tar | ssh B_IP "cd /tmp; tar -xf -"

The same technique also works purely locally:

cd /tmp/test1
tar -cf - . | (cd /tmp/test2 ; tar -xvpf -)

although some people have concluded that for purely local copies plain cp is faster.

Other tricks: besides single files you often copy whole directories, sometimes together with the attributes of the files/directories. With cp, the -R option copies directories recursively; the -p option preserves file attributes (by default mode, ownership, and timestamps; --preserve[=ATTR_LIST] can name further attributes such as context, links, xattr, all); the -d option preserves links. Or simply use -a, which is equivalent to -dR --preserve=all.

If you want to watch the progress while copying a large number of small files, a simple little script will do (replace <target-directory> with the real destination):

cd /tmp/test
for i in *; do
    cp "$i" <target-directory>
    echo "$i is ok"
done
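The local tar-pipe copy above is easy to check end to end. A minimal sketch (the paths /tmp/tarpipe_src and /tmp/tarpipe_dst are made-up demo paths, not from the original):

```shell
#!/bin/sh
# Demo of the local tar-pipe copy: tar packs the source tree into a pipe
# and a subshell unpacks it in the destination; -p preserves permissions.
# /tmp/tarpipe_src and /tmp/tarpipe_dst are throwaway demo paths.
set -e

rm -rf /tmp/tarpipe_src /tmp/tarpipe_dst
mkdir -p /tmp/tarpipe_src/sub /tmp/tarpipe_dst

echo hello > /tmp/tarpipe_src/a.txt
echo world > /tmp/tarpipe_src/sub/b.txt
chmod 600 /tmp/tarpipe_src/a.txt   # non-default mode, to see it preserved

# The actual trick: tar writes the archive to stdout,
# and a subshell in the target directory reads it from stdin.
cd /tmp/tarpipe_src
tar -cf - . | (cd /tmp/tarpipe_dst ; tar -xpf -)

# Show what arrived.
cat /tmp/tarpipe_dst/a.txt /tmp/tarpipe_dst/sub/b.txt
```

No intermediate archive file ever touches the disk, which is exactly why this beats the pack-copy-unpack sequence for many small files.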
Finally, one more trick that is not really a trick: before using a tool for a task, ask yourself whether the tool you are currently using is the most suitable one. Is there a better tool or method? And if the tool really is right for the job, is there some particular technique that would make you more productive with it? (Reading the help files often brings pleasant surprises.)
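As one example of what the help files can turn up: tar's -z flag gzips the stream in flight, which can shrink the nc/ssh pipelines above considerably on compressible data. A sketch, with the network hop replaced by a plain local pipe so it runs anywhere (/tmp/gz_src and /tmp/gz_dst are made-up demo paths):

```shell
#!/bin/sh
# Compress-in-flight variant of the tar pipeline: -z gzips on the sending
# side and gunzips on the receiving side. Over a network this would look
# like:  tar -czf - /tmp/test | ssh B_IP "cd /tmp; tar -xzf -"
# Here the ssh hop is replaced by a local pipe so the sketch is runnable.
set -e

rm -rf /tmp/gz_src /tmp/gz_dst
mkdir -p /tmp/gz_src /tmp/gz_dst

# Highly compressible demo payload.
yes "some repetitive line" | head -n 1000 > /tmp/gz_src/data.txt

# Sender packs and compresses; receiver decompresses and unpacks.
tar -czf - -C /tmp/gz_src . | tar -xzf - -C /tmp/gz_dst

wc -l < /tmp/gz_dst/data.txt
```

Whether compression helps depends on the data and the link: on a fast LAN with already-compressed files, the gzip step can become the bottleneck instead.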