| File Transfer Protocol | |
|---|---|
| Name | File Transfer Protocol |
| Developer | Abhay Bhushan for the ARPANET |
| Date | 16 April 1971 |
| OSI layer | Application layer |
| Ports | 21 (control), 20 (data) |
| RFC | RFC 959 |
File Transfer Protocol (FTP) is a standard network protocol used to transfer computer files between a client and a server on a computer network. Developed in the early 1970s for use on the ARPANET, it became a foundational technology for data exchange and operates at the application layer of the TCP/IP model. The protocol uses separate TCP connections for control commands and data transfer, enabling reliable file management across diverse systems.
The protocol was first defined in RFC 114 by Abhay Bhushan of the Massachusetts Institute of Technology in 1971, with its modern specification established later by RFC 959. Its creation was driven by the needs of the early ARPANET, a precursor to the modern Internet, to enable efficient resource sharing. For decades, it served as a primary method for distributing software, documents, and datasets, heavily utilized by institutions like CERN and academic networks. The protocol's design, which separates control and data channels, influenced subsequent network applications and remains a key subject of study in computer science.
Operation relies on a clear client-server architecture, where the client initiates a connection to a server's well-known port 21 for the control connection. This channel is used for sending commands, such as those defined in the protocol's command set, and receiving reply codes from the server. A separate data connection, traditionally on port 20, is dynamically established for the actual transfer of files or directory listings. Communication is conducted in plain text, with sessions managed through a series of standardized commands and numeric responses, a method also seen in protocols like Simple Mail Transfer Protocol.
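RFC 959 groups the numeric replies sent over the control connection by their first digit, which tells the client how to proceed. A minimal sketch of that classification (the example reply strings are illustrative):

```python
# First-digit reply categories defined in RFC 959.
CATEGORIES = {
    "1": "positive preliminary",   # action begun, expect another reply
    "2": "positive completion",    # e.g. 220 "Service ready"
    "3": "positive intermediate",  # e.g. 331 "User name okay, need password"
    "4": "transient negative",     # e.g. 421 "Service not available"
    "5": "permanent negative",     # e.g. 530 "Not logged in"
}

def classify_reply(line: str) -> str:
    """Return the RFC 959 category for a reply line such as '220 Ready'."""
    code = line[:3]
    if len(code) != 3 or not code.isdigit() or code[0] not in CATEGORIES:
        raise ValueError(f"not a valid FTP reply: {line!r}")
    return CATEGORIES[code[0]]

print(classify_reply("220 Service ready for new user."))   # positive completion
print(classify_reply("331 User name okay, need password.")) # positive intermediate
```

A client typically loops on category 1 replies, proceeds on 2, sends the next command in a sequence on 3, retries on 4, and aborts on 5.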
The protocol supports several modes for structuring data transmission over the established data connection. In stream mode, data is sent as a continuous stream of bytes, which is the default and most common method. Block mode divides the data into blocks preceded by headers, while compressed mode can apply simple run-length encoding, though this is rarely implemented in modern clients. Furthermore, transfers can be conducted in either active or passive mode, which determines which host initiates the data connection, a crucial distinction for traversing network configurations involving firewalls or Network Address Translation.
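In passive mode, the client sends a PASV command and the server answers with a code 227 reply containing six comma-separated numbers: the four octets of the data-connection address, then the port split into a high and a low byte. A sketch of parsing such a reply (the sample address is made up):

```python
import re

def parse_pasv_reply(reply: str) -> tuple[str, int]:
    """Extract (host, port) from a 227 reply such as
    '227 Entering Passive Mode (192,168,1,2,197,143).'
    The port is encoded in two bytes: port = p1 * 256 + p2."""
    match = re.search(r"\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)", reply)
    if match is None:
        raise ValueError(f"no address in PASV reply: {reply!r}")
    nums = [int(n) for n in match.groups()]
    host = ".".join(str(n) for n in nums[:4])
    port = nums[4] * 256 + nums[5]
    return host, port

print(parse_pasv_reply("227 Entering Passive Mode (192,168,1,2,197,143)."))
# ('192.168.1.2', 50575)
```

Because the client initiates the data connection in passive mode, this arrangement works through client-side NAT and firewalls that would block the server-initiated connection of active mode.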
A significant limitation is the lack of encryption for both control commands and data, meaning credentials and file contents are transmitted in plaintext, vulnerable to interception via tools like Wireshark. To address this, several secure variants were developed. SSH File Transfer Protocol provides secure file transfer within the encrypted Secure Shell protocol suite. More directly, explicit Transport Layer Security was integrated to create FTPS, as defined in RFC 4217. Additionally, Trivial File Transfer Protocol offers a simpler, connectionless alternative using the User Datagram Protocol, often used for booting diskless workstations.
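Python's standard `ftplib` module includes an `FTP_TLS` class that implements explicit FTPS per RFC 4217. A minimal sketch; the host name and credentials are placeholders, not a real server:

```python
import ftplib

def make_ftps_client() -> ftplib.FTP_TLS:
    """Build an (unconnected) explicit-FTPS client."""
    return ftplib.FTP_TLS()

if __name__ == "__main__":
    # Hypothetical server and credentials for illustration only.
    client = make_ftps_client()
    client.connect("ftps.example.com", 21)  # plain TCP first, then upgrade
    client.login("user", "password")        # issues AUTH TLS before USER/PASS
    client.prot_p()                         # encrypt the data channel as well
    client.retrlines("LIST")
    client.quit()
```

Note that without the `prot_p()` call only the control connection is encrypted; file contents would still cross the network in the clear.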
Numerous software implementations exist across all major operating systems. Command-line clients are built into Unix-like systems and Microsoft Windows, while graphical clients like FileZilla and WinSCP are widely popular. The protocol is commonly used for website maintenance, allowing developers to upload files to web servers running software such as Apache HTTP Server. Despite the rise of Hypertext Transfer Protocol Secure and cloud services, it remains entrenched in enterprise environments, industrial control systems, and for accessing public archives like those hosted by the Internet Archive.
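A typical maintenance task of the kind described above, uploading a file to a web server, can be sketched with the standard library's `ftplib`; the host, credentials, and directory below are placeholders:

```python
import ftplib
import io

def upload_bytes(ftp: ftplib.FTP, remote_name: str, data: bytes) -> str:
    """Upload raw bytes under remote_name via the STOR command;
    returns the server's final reply line."""
    return ftp.storbinary(f"STOR {remote_name}", io.BytesIO(data))

if __name__ == "__main__":
    # Hypothetical server and credentials for illustration only.
    with ftplib.FTP("ftp.example.com") as ftp:
        ftp.login("user", "password")
        ftp.cwd("/public_html")
        print(upload_bytes(ftp, "index.html", b"<h1>Hello</h1>"))
```

`storbinary` transfers the file over the separate data connection in binary (image) type, which is the safe default for anything that is not plain text.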