diff --git a/docs/cesa.md b/docs/cesa.md
index 82f990c..93d110c 100644
--- a/docs/cesa.md
+++ b/docs/cesa.md
@@ -1,2 +1,267 @@
-!!! info
-    Coming soon.
+In this guide we will explain how to leverage the Marvell CESA units of the Armada 388 SoC to accelerate network application encryption and disk encryption. Disk encryption acceleration is straightforward because it happens in-kernel through the **dm-crypt** subsystem, which already supports hardware cryptographic engines. On the other hand, encryption acceleration for userspace network applications, such as Apache2 or OpenSSH, requires some patching and recompiling in order to leverage the Marvell CESA units.
+
+!!! warning "Before you go further!"
+    This guide is for advanced users who understand the security implications of tweaking encryption libraries and cipher configurations.
+
+## What is CESA?
+
+The Cryptographic Engines and Security Accelerator (CESA) reduces the CPU packet processing overhead by performing time-consuming cryptographic operations, such as:
+
+* Advanced Encryption Standard (AES)
+* Data Encryption Standard (DES)
+* Triple Data Encryption Standard (3DES) encryption
+* Message Digest 5 (MD5)
+* Secure Hash Algorithm-1 (SHA-1)
+* Secure Hash Algorithm 2 with 256-bit digest size (SHA-2) authentication
+
+The CESA-DMA engine (also called TDMA) controls communication between the main memory and the internal SRAM.
+
+### CESA Functional Block Diagram
+
+![CESA Block Diagram](/img/cesa/cesa_block_diagram.png)
+
+The above block diagram shows a single CESA unit.
+
+### Crypto API
+
+The Crypto API is a cryptography framework in the Linux kernel used by various parts of the kernel that deal with cryptography, such as IPsec and dm-crypt. It was introduced in kernel version 2.5.45 and has since expanded to include essentially all popular block ciphers and hash functions.
+
+### Userspace Interfacing
+
+Many platforms that provide hardware encryption acceleration expose it to programs through an extension of the instruction set architecture (ISA) of the chipset (e.g. the AES instruction set for x86). With this sort of implementation, any program (kernel-mode or userspace) may use these features directly.
+
+However, crypto hardware engines on ARM System-on-Chips are not implemented as ISA extensions, and are only accessible through kernel-mode drivers. In order for userspace applications, such as OpenSSL, to take advantage of encryption acceleration, they must interface with the kernel cryptography framework (Crypto API).
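+
+You can check that the kernel driver of the CESA engine has registered its algorithms with the Crypto API by looking at */proc/crypto*. The *mv-* driver prefix used below matches the mainline Marvell CESA driver; exact names may differ between kernel versions, so treat this only as a quick sanity check.
+
+```
+# CESA-backed implementations are registered with a "mv-" driver prefix,
+# e.g. mv-cbc-aes, mv-sha1, mv-md5
+grep -B 2 -A 2 "mv-" /proc/crypto
+```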
+
+### Crypto API Interfaces
+
+There are two interfaces that provide userspace access to the Crypto API:
+
+* **cryptodev (/dev/crypto)**: cryptodev-linux is implemented as a standalone module that requires no dependencies other than a stock Linux kernel. Its API is compatible with OpenBSD's cryptodev userspace API (/dev/crypto).
+
+* **AF_ALG**: a socket-based interface that has been part of the mainline Linux kernel since version 2.6.38.
+
+![Crypto API Interface](/img/cesa/crypto_api_interfaces.png)
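+
+This guide focuses on **cryptodev** (see the next section), but if you want to experiment with **AF_ALG** first you can check whether the userspace Crypto API is enabled in your kernel. The config file location below is the usual Debian/Armbian one and may differ on other setups.
+
+```
+# AF_ALG support shows up as CONFIG_CRYPTO_USER_API* options
+grep CRYPTO_USER_API /boot/config-$(uname -r)
+```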
+
+## Network Application Encryption Acceleration
+
+The following instructions were written for **Debian Stretch**, using **cryptodev** as the Crypto API userspace interface.
+
+You can refer to the following forum [thread](https://forum.armbian.com/topic/8486-helios4-cryptographic-engines-and-security-accelerator-cesa-benchmarking/) where we explain why we chose to focus on **cryptodev**.
+
+### Prerequisites
+
+You will need to add the *debian source* repository to your APT list in order to download the **libssl** source code. Edit */etc/apt/sources.list* and uncomment the following line.
+
+```
+deb-src http://httpredir.debian.org/debian stretch main contrib non-free
+```
+
+Don't forget to update your APT database afterwards.
+
+```
+sudo apt-get update
+```
+
+In order to compile **cryptodev** and **libssl** you will need to install the following Debian packages.
+
+```
+sudo apt-get install build-essential fakeroot devscripts debhelper
+```
+
+### Install cryptodev
+
+```
+sudo apt-get install linux-headers-next-mvebu
+
+git clone https://github.com/cryptodev-linux/cryptodev-linux.git
+
+cd cryptodev-linux/
+
+make
+
+sudo make install
+
+sudo depmod -a
+
+sudo modprobe cryptodev
+```
+
+We can check that **cryptodev** is properly loaded with the following:
+
+```
+lsmod | grep cryptodev
+cryptodev 36864 0
+
+dmesg | grep cryptodev
+[ 154.966710] cryptodev: loading out-of-tree module taints kernel.
+[ 154.971590] cryptodev: driver 1.9 loaded.
+```
+
+To load **cryptodev** automatically at startup you can do the following. It is strongly advised to do this only after you have ensured that everything works fine, to avoid locking yourself out of the Helios4.
+
+```
+echo "cryptodev" | sudo tee -a /etc/modules
+```
+
+### Recompile OpenSSL (libssl)
+
+OpenSSL provides the libssl and libcrypto shared libraries. **libssl** provides the client and server-side implementations for SSLv3 and TLS.
+
+Under Debian Stretch a lot of applications, like Apache2 and OpenSSH, still depend on libssl from OpenSSL version 1.0.2; however, cryptodev support is only properly implemented in OpenSSL since version 1.1.1.
+
+In order to make libssl 1.0.2 support cryptodev, we will need to recompile the Debian libssl1.0.2 package after applying the patch that was originally proposed in the following [pull request](https://github.com/openssl/openssl/pull/191) to the OpenSSL project.
+
+```
+mkdir libssl; cd libssl
+
+apt-get source libssl1.0.2
+```
+
+Apply the patch that you can find [here](/files/cesa/openssl-add-cryptodev-support.patch).
+
+```
+wget /files/cesa/openssl-add-cryptodev-support.patch
+
+patch < openssl-add-cryptodev-support.patch openssl1.0-1.0.2l/crypto/engine/eng_cryptodev.c
+```
+
+Now let's compile libssl with **cryptodev** enabled.
+
+```
+cd openssl1.0-1.0.2l/
+
+sed -i -e "s/CONFARGS =/CONFARGS = -DHAVE_CRYPTODEV/" debian/rules
+
+dch -i "Enabled cryptodev support"
+
+DEB_BUILD_OPTIONS=nocheck dpkg-buildpackage -b -rfakeroot
+```
+
+!!! note
+    Most examples online also use the -DUSE_CRYPTODEV_DIGESTS flag. However, it was shown via [benchmark](/cesa/#https-benchmark) that using the CESA engine for hashing results in a performance penalty.
+
+If all goes well you should see a couple of .deb files. Look for the libssl .deb file and install it.
+
+```
+cd ..
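+
+# Optional check: list the packages produced by the build.
+# The exact version string in the file names may differ from the command below.
+ls *.deb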
+
+sudo dpkg -i libssl1.0.2_1.0.2l-2+deb9u3.1_armhf.deb
+```
+
+### Apache2
+
+In order to make Apache2 offload encryption to the hardware engine, you will need to force ciphers that use encryption algorithms supported by the Marvell CESA units:
+
+* AES-128-CBC
+* AES-192-CBC
+* AES-256-CBC
+
+Edit */etc/apache2/mods-available/ssl.conf* and modify it as follows:
+
+```
+# SSL Cipher Suite
+#
+# SSLCipherSuite HIGH:!aNULL
+SSLCipherSuite AES128-SHA
+```
+
+!!! Important
+    The AES-xxx-CBC ciphers are no longer considered the most secure choice and are not supported anymore in TLSv1.3. Use these ciphers at your own risk.
+
+### OpenSSH
+
+**Server Side:**
+
+In order to make the OpenSSH server offload encryption to the hardware engine, you will need to force ciphers that use encryption algorithms supported by the Marvell CESA units.
+
+* AES-128-CBC
+* AES-192-CBC
+* AES-256-CBC
+
+Edit */etc/ssh/sshd_config* and add the following line.
+
+```
+# Ciphers and keying
+Ciphers aes128-cbc
+```
+
+**Client Side: (optional)**
+
+To make your SSH client support the cipher defined on the server side, you might need to edit */etc/ssh/ssh_config* and add the following line.
+
+```
+# Ciphers aes128-ctr,aes192-ctr,aes256-ctr,arcfour256,arcfour128,aes128-cbc,3des-cbc
+Ciphers aes128-cbc
+```
+
+!!! Important
+    The AES-xxx-CBC ciphers are no longer considered the most secure choice and are not supported anymore in TLSv1.3. Use these ciphers at your own risk.
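+
+Before running any benchmark, it is worth verifying that the rebuilt libssl can actually reach the CESA units and that your services negotiate one of the CBC ciphers configured above. The commands below are only a sanity check; the hostname and user are placeholders to adapt to your setup.
+
+```
+# the patched libssl should list a cryptodev engine entry
+openssl engine
+
+# rough throughput check of a CESA-supported cipher
+# (rerun with the cryptodev module unloaded to compare against pure software)
+openssl speed -elapsed -evp aes-128-cbc
+
+# TLS: check the cipher negotiated with Apache2
+openssl s_client -connect helios4.local:443 -cipher AES128-SHA < /dev/null 2>/dev/null | grep "Cipher"
+
+# SSH: verbose output shows the cipher selected for the session
+ssh -v user@helios4.local exit 2>&1 | grep -i "cipher"
+```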
+
+### HTTPS Benchmark
+
+#### Setup
+
+Apache2 is configured to expose a 1GB file hosted on an SSD connected to the Helios4. A test PC is connected directly to the Helios4 Ethernet port, and wget is used to perform the file download.
+
+We ran three batches of download tests; for each batch, Apache2 was configured to use a specific cipher that we know is supported by the CESA engine.
+
+* AES_128_CBC_SHA
+* AES_128_CBC_SHA256
+* AES_256_CBC_SHA256
+
+For each batch, we performed the following 3 download tests:
+
+1. without the cryptodev module loaded (100% software encryption)
+2. with cryptodev loaded and libssl (openssl) compiled with -DHAVE_CRYPTODEV -DUSE_CRYPTODEV_DIGESTS
+3. with cryptodev loaded and libssl (openssl) compiled only with -DHAVE_CRYPTODEV, which means hashing operations are still done 100% in software.
+
+#### Results
+
+|Cipher|CPU User%|CPU Sys%|Throughput (MB/s)|
+|---------------|-----|----|-----------------|
+|**AES_128_CBC_SHA**|
+|Software encryption|46.9|7.9|32.8|
+|HW encryption with hashing|6.2|24.6|26.7|
+|HW encryption without hashing|19.9|16.4|**47.8**|
+|**AES_128_CBC_SHA256**|
+|Software encryption|43.1|7.0|28.1|
+|HW encryption with hashing|7.0|24.6|27.1|
+|HW encryption without hashing|24.1|12.9|**36.6**|
+|**AES_256_CBC_SHA256**|
+|Software encryption|45.1|5.0|23.9|
+|HW encryption with hashing|7.0|24.5|26.7|
+|HW encryption without hashing|24.2|12.0|**35.8**|
+|**For reference**|
+|AES_128_GCM_SHA256<br>(Default Apache 2.4 TLS cipher. GCM mode cannot be accelerated by CESA.)|42.9|7.2|30.6|
+|Clear text HTTP|1.0|29.8|112.1|
+
+!!! note
+    CPU utilization is measured across both cores. However, each test is a single-threaded process running on a single core, so a CPU utilization of around 50% (User% + Sys%) means the core used for the test is fully loaded.
+
+**CONCLUSION**
+
+1. Hashing operations are slower on the CESA engine than on the CPU itself, which is why HW encryption with hashing performs worse than 100% software encryption.
+
+2. HW encryption without hashing provides a 30 to 50% throughput increase while decreasing the load on the CPU by 20 to 30%.
+
+## Accelerate Disk Encryption
+
+Since **dm-crypt** uses the kernel Crypto API directly, no patching or recompiling is required; just make sure to select a cipher supported by CESA, such as AES-CBC (e.g. the *aes-cbc-essiv:sha256* cipher spec), when creating your LUKS container.
+
+Refer to the following great [tutorial](https://www.cyberciti.biz/hardware/howto-linux-hard-disk-encryption-with-luks-cryptsetup-command/) to set up disk encryption with LUKS.
+
+## References
+
+* [An overview of the crypto subsystem](http://events17.linuxfoundation.org/sites/events/files/slides/brezillon-crypto-framework_0.pdf)
+* [Utilizing the crypto accelerators](https://events.static.linuxfound.org/sites/events/files/slides/lcj-2014-crypto-user.pdf)
+* [Linux crypto](https://www.slideshare.net/nij05/slideshare-linux-crypto-60753522)
+* [Crypto API definition](https://en.wikipedia.org/wiki/Crypto_API_(Linux))
+* [Linux Kernel cryptography algorithm implementation process](https://szlin.me/2017/04/05/linux-kernel-%E5%AF%86%E7%A2%BC%E5%AD%B8%E6%BC%94%E7%AE%97%E6%B3%95%E5%AF%A6%E4%BD%9C%E6%B5%81%E7%A8%8B/)
+* [Cryptodev benchmark](http://cryptodev-linux.org/comparison.html)
+* [Accelerating crypto](https://lauri.võsandi.com/2014/07/cryptodev.html)
+* [Hardware Cryptography cryptodev/openssl](https://forum.doozan.com/read.php?2,18152)
diff --git a/docs/files/cesa/openssl-add-cryptodev-support.patch b/docs/files/cesa/openssl-add-cryptodev-support.patch new file mode 100644 index 0000000..c31c793 --- /dev/null +++ b/docs/files/cesa/openssl-add-cryptodev-support.patch @@ -0,0 +1,2783 @@ +diff -Naur openssl1.0-1.0.2l.orig/crypto/engine/eng_cryptodev.c openssl1.0-1.0.2l/crypto/engine/eng_cryptodev.c +--- openssl1.0-1.0.2l.orig/crypto/engine/eng_cryptodev.c 2017-05-25 12:54:38.000000000 +0000 ++++ openssl1.0-1.0.2l/crypto/engine/eng_cryptodev.c 2018-10-19 10:29:55.138846957 +0000 +@@ -2,6 +2,7 @@ + * Copyright (c) 2002 Bob Beck + * Copyright (c) 2002 Theo de Raadt + * Copyright (c) 2002 Markus Friedl ++ * Copyright (c) 2012 Nikos Mavrogiannopoulos + * All rights reserved. 
+ * + * Redistribution and use in source and binary forms, with or without +@@ -26,15 +27,14 @@ + * + */ + +-#include + #include + #include + #include + #include + + #if (defined(__unix__) || defined(unix)) && !defined(USG) && \ +- (defined(OpenBSD) || defined(__FreeBSD__)) +-# include ++ (defined(OpenBSD) || defined(__FreeBSD__)) ++#include + # if (OpenBSD >= 200112) || ((__FreeBSD_version >= 470101 && __FreeBSD_version < 500000) || __FreeBSD_version >= 500041) + # define HAVE_CRYPTODEV + # endif +@@ -45,39 +45,40 @@ + + #ifndef HAVE_CRYPTODEV + +-void ENGINE_load_cryptodev(void) ++void ++ENGINE_load_cryptodev(void) + { +- /* This is a NOP on platforms without /dev/crypto */ +- return; ++ /* This is a NOP on platforms without /dev/crypto */ ++ return; + } + + #else + +-# include +-# include +-# include +-# include +-# include +-# include +-# include +-# include +-# include +-# include +-# include +-# include +-# include +-# include +-# include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include ++#include + + struct dev_crypto_state { +- struct session_op d_sess; +- int d_fd; +-# ifdef USE_CRYPTODEV_DIGESTS +- char dummy_mac_key[HASH_MAX_LEN]; +- unsigned char digest_res[HASH_MAX_LEN]; +- char *mac_data; +- int mac_len; +-# endif ++ struct session_op d_sess; ++ int d_fd; ++ ++#ifdef USE_CRYPTODEV_DIGESTS ++ unsigned char digest_res[HASH_MAX_LEN]; ++ char *mac_data; ++ int mac_len; ++#endif + }; + + static u_int32_t cryptodev_asymfeat = 0; +@@ -86,196 +87,158 @@ + static int open_dev_crypto(void); + static int get_dev_crypto(void); + static int get_cryptodev_ciphers(const int **cnids); +-# ifdef USE_CRYPTODEV_DIGESTS ++#ifdef USE_CRYPTODEV_DIGESTS + static int get_cryptodev_digests(const int **cnids); +-# endif ++#endif + static int cryptodev_usable_ciphers(const int **nids); + static int cryptodev_usable_digests(const int **nids); + static int cryptodev_cipher(EVP_CIPHER_CTX *ctx, unsigned char *out, +- const unsigned char *in, size_t inl); ++ const unsigned char *in, size_t inl); + static int cryptodev_init_key(EVP_CIPHER_CTX *ctx, const unsigned char *key, +- const unsigned char *iv, int enc); ++ const unsigned char *iv, int enc); + static int cryptodev_cleanup(EVP_CIPHER_CTX *ctx); + static int cryptodev_engine_ciphers(ENGINE *e, const EVP_CIPHER **cipher, +- const int **nids, int nid); ++ const int **nids, int nid); + static int cryptodev_engine_digests(ENGINE *e, const EVP_MD **digest, +- const int **nids, int nid); ++ const int **nids, int nid); + static int bn2crparam(const BIGNUM *a, struct crparam *crp); + static int crparam2bn(struct crparam *crp, BIGNUM *a); + static void zapparams(struct crypt_kop *kop); + static int cryptodev_asym(struct crypt_kop *kop, int rlen, BIGNUM *r, +- int slen, BIGNUM *s); ++ int slen, BIGNUM *s); + + static int cryptodev_bn_mod_exp(BIGNUM *r, const BIGNUM *a, +- const BIGNUM *p, const BIGNUM *m, BN_CTX *ctx, +- BN_MONT_CTX *m_ctx); +-static int cryptodev_rsa_nocrt_mod_exp(BIGNUM *r0, const BIGNUM *I, RSA *rsa, +- BN_CTX *ctx); +-static int cryptodev_rsa_mod_exp(BIGNUM *r0, const BIGNUM *I, RSA *rsa, +- BN_CTX *ctx); ++ const BIGNUM *p, const BIGNUM *m, BN_CTX *ctx, BN_MONT_CTX *m_ctx); ++static int cryptodev_rsa_nocrt_mod_exp(BIGNUM *r0, const BIGNUM *I, ++ RSA *rsa, BN_CTX *ctx); ++static int cryptodev_rsa_mod_exp(BIGNUM *r0, const BIGNUM *I, RSA *rsa, BN_CTX *ctx); + static int cryptodev_dsa_bn_mod_exp(DSA *dsa, BIGNUM *r, BIGNUM *a, +- const 
BIGNUM *p, const BIGNUM *m, +- BN_CTX *ctx, BN_MONT_CTX *m_ctx); ++ const BIGNUM *p, const BIGNUM *m, BN_CTX *ctx, BN_MONT_CTX *m_ctx); + static int cryptodev_dsa_dsa_mod_exp(DSA *dsa, BIGNUM *t1, BIGNUM *g, +- BIGNUM *u1, BIGNUM *pub_key, BIGNUM *u2, +- BIGNUM *p, BN_CTX *ctx, +- BN_MONT_CTX *mont); +-static DSA_SIG *cryptodev_dsa_do_sign(const unsigned char *dgst, int dlen, +- DSA *dsa); ++ BIGNUM *u1, BIGNUM *pub_key, BIGNUM *u2, BIGNUM *p, ++ BN_CTX *ctx, BN_MONT_CTX *mont); ++static DSA_SIG *cryptodev_dsa_do_sign(const unsigned char *dgst, ++ int dlen, DSA *dsa); + static int cryptodev_dsa_verify(const unsigned char *dgst, int dgst_len, +- DSA_SIG *sig, DSA *dsa); ++ DSA_SIG *sig, DSA *dsa); + static int cryptodev_mod_exp_dh(const DH *dh, BIGNUM *r, const BIGNUM *a, +- const BIGNUM *p, const BIGNUM *m, BN_CTX *ctx, +- BN_MONT_CTX *m_ctx); +-static int cryptodev_dh_compute_key(unsigned char *key, const BIGNUM *pub_key, +- DH *dh); ++ const BIGNUM *p, const BIGNUM *m, BN_CTX *ctx, ++ BN_MONT_CTX *m_ctx); ++static int cryptodev_dh_compute_key(unsigned char *key, ++ const BIGNUM *pub_key, DH *dh); + static int cryptodev_ctrl(ENGINE *e, int cmd, long i, void *p, +- void (*f) (void)); ++ void (*f)(void)); + void ENGINE_load_cryptodev(void); + + static const ENGINE_CMD_DEFN cryptodev_defns[] = { +- {0, NULL, NULL, 0} ++ { 0, NULL, NULL, 0 } + }; + + static struct { +- int id; +- int nid; +- int ivmax; +- int keylen; ++ int id; ++ int nid; ++ int ivmax; ++ int keylen; + } ciphers[] = { +- { +- CRYPTO_ARC4, NID_rc4, 0, 16, +- }, +- { +- CRYPTO_DES_CBC, NID_des_cbc, 8, 8, +- }, +- { +- CRYPTO_3DES_CBC, NID_des_ede3_cbc, 8, 24, +- }, +- { +- CRYPTO_AES_CBC, NID_aes_128_cbc, 16, 16, +- }, +- { +- CRYPTO_AES_CBC, NID_aes_192_cbc, 16, 24, +- }, +- { +- CRYPTO_AES_CBC, NID_aes_256_cbc, 16, 32, +- }, +-# ifdef CRYPTO_AES_CTR +- { +- CRYPTO_AES_CTR, NID_aes_128_ctr, 14, 16, +- }, +- { +- CRYPTO_AES_CTR, NID_aes_192_ctr, 14, 24, +- }, +- { +- CRYPTO_AES_CTR, NID_aes_256_ctr, 14, 32, +- }, +-# endif +- { +- CRYPTO_BLF_CBC, NID_bf_cbc, 8, 16, +- }, +- { +- CRYPTO_CAST_CBC, NID_cast5_cbc, 8, 16, +- }, +- { +- CRYPTO_SKIPJACK_CBC, NID_undef, 0, 0, +- }, +- { +- 0, NID_undef, 0, 0, +- }, ++ { CRYPTO_ARC4, NID_rc4, 0, 16, }, ++ { CRYPTO_DES_CBC, NID_des_cbc, 8, 8, }, ++ { CRYPTO_3DES_CBC, NID_des_ede3_cbc, 8, 24, }, ++ { CRYPTO_AES_CBC, NID_aes_128_cbc, 16, 16, }, ++ { CRYPTO_AES_CBC, NID_aes_192_cbc, 16, 24, }, ++ { CRYPTO_AES_CBC, NID_aes_256_cbc, 16, 32, }, ++#ifdef CRYPTO_AES_CTR ++ { CRYPTO_AES_CTR, NID_aes_128_ctr, 14, 16, }, ++ { CRYPTO_AES_CTR, NID_aes_192_ctr, 14, 24, }, ++ { CRYPTO_AES_CTR, NID_aes_256_ctr, 14, 32, }, ++#endif ++ { CRYPTO_BLF_CBC, NID_bf_cbc, 8, 16, }, ++ { CRYPTO_CAST_CBC, NID_cast5_cbc, 8, 16, }, ++ { CRYPTO_SKIPJACK_CBC, NID_undef, 0, 0, }, ++ { 0, NID_undef, 0, 0, }, + }; + +-# ifdef USE_CRYPTODEV_DIGESTS ++#ifdef USE_CRYPTODEV_DIGESTS + static struct { +- int id; +- int nid; +- int keylen; ++ int id; ++ int nid; ++ int digestlen; + } digests[] = { +- { +- CRYPTO_MD5_HMAC, NID_hmacWithMD5, 16 +- }, +- { +- CRYPTO_SHA1_HMAC, NID_hmacWithSHA1, 20 +- }, +- { +- CRYPTO_RIPEMD160_HMAC, NID_ripemd160, 16 +- /* ? 
*/ +- }, +- { +- CRYPTO_MD5_KPDK, NID_undef, 0 +- }, +- { +- CRYPTO_SHA1_KPDK, NID_undef, 0 +- }, +- { +- CRYPTO_MD5, NID_md5, 16 +- }, +- { +- CRYPTO_SHA1, NID_sha1, 20 +- }, +- { +- 0, NID_undef, 0 +- }, ++#if 0 ++ /* HMAC is not supported */ ++ { CRYPTO_MD5_HMAC, NID_hmacWithMD5, 16}, ++ { CRYPTO_SHA1_HMAC, NID_hmacWithSHA1, 20}, ++ { CRYPTO_SHA2_256_HMAC, NID_hmacWithSHA256, 32}, ++ { CRYPTO_SHA2_384_HMAC, NID_hmacWithSHA384, 48}, ++ { CRYPTO_SHA2_512_HMAC, NID_hmacWithSHA512, 64}, ++#endif ++ { CRYPTO_MD5, NID_md5, 16}, ++ { CRYPTO_SHA1, NID_sha1, 20}, ++ { CRYPTO_SHA2_256, NID_sha256, 32}, ++ { CRYPTO_SHA2_384, NID_sha384, 48}, ++ { CRYPTO_SHA2_512, NID_sha512, 64}, ++ { 0, NID_undef, 0}, + }; +-# endif ++#endif + + /* + * Return a fd if /dev/crypto seems usable, 0 otherwise. + */ +-static int open_dev_crypto(void) ++static int ++open_dev_crypto(void) + { +- static int fd = -1; ++ static int fd = -1; + +- if (fd == -1) { +- if ((fd = open("/dev/crypto", O_RDWR, 0)) == -1) +- return (-1); +- /* close on exec */ +- if (fcntl(fd, F_SETFD, 1) == -1) { +- close(fd); +- fd = -1; +- return (-1); +- } +- } +- return (fd); +-} +- +-static int get_dev_crypto(void) +-{ +- int fd, retfd; +- +- if ((fd = open_dev_crypto()) == -1) +- return (-1); +-# ifndef CRIOGET_NOT_NEEDED +- if (ioctl(fd, CRIOGET, &retfd) == -1) +- return (-1); +- +- /* close on exec */ +- if (fcntl(retfd, F_SETFD, 1) == -1) { +- close(retfd); +- return (-1); +- } +-# else +- retfd = fd; +-# endif +- return (retfd); ++ if (fd == -1) { ++ if ((fd = open("/dev/crypto", O_RDWR, 0)) == -1) ++ return (-1); ++ /* close on exec */ ++ if (fcntl(fd, F_SETFD, 1) == -1) { ++ close(fd); ++ fd = -1; ++ return (-1); ++ } ++ } ++ return (fd); ++} ++ ++static int ++get_dev_crypto(void) ++{ ++ int fd, retfd; ++ ++ if ((fd = open_dev_crypto()) == -1) ++ return (-1); ++#ifndef CRIOGET_NOT_NEEDED ++ if (ioctl(fd, CRIOGET, &retfd) == -1) ++ return (-1); ++ ++ /* close on exec */ ++ if (fcntl(retfd, F_SETFD, 1) == -1) { ++ close(retfd); ++ return (-1); ++ } ++#else ++ retfd = fd; ++#endif ++ return (retfd); + } + + static void put_dev_crypto(int fd) + { +-# ifndef CRIOGET_NOT_NEEDED +- close(fd); +-# endif ++#ifndef CRIOGET_NOT_NEEDED ++ close(fd); ++#endif + } + + /* Caching version for asym operations */ +-static int get_asym_dev_crypto(void) ++static int ++get_asym_dev_crypto(void) + { +- static int fd = -1; ++ static int fd = -1; + +- if (fd == -1) +- fd = get_dev_crypto(); +- return fd; ++ if (fd == -1) ++ fd = get_dev_crypto(); ++ return fd; + } + + /* +@@ -284,76 +247,80 @@ + * returning them here is harmless, as long as we return NULL + * when asked for a handler in the cryptodev_engine_ciphers routine + */ +-static int get_cryptodev_ciphers(const int **cnids) ++static int ++get_cryptodev_ciphers(const int **cnids) + { +- static int nids[CRYPTO_ALGORITHM_MAX]; +- struct session_op sess; +- int fd, i, count = 0; +- +- if ((fd = get_dev_crypto()) < 0) { +- *cnids = NULL; +- return (0); +- } +- memset(&sess, 0, sizeof(sess)); +- sess.key = (caddr_t) "123456789abcdefghijklmno"; +- +- for (i = 0; ciphers[i].id && count < CRYPTO_ALGORITHM_MAX; i++) { +- if (ciphers[i].nid == NID_undef) +- continue; +- sess.cipher = ciphers[i].id; +- sess.keylen = ciphers[i].keylen; +- sess.mac = 0; +- if (ioctl(fd, CIOCGSESSION, &sess) != -1 && +- ioctl(fd, CIOCFSESSION, &sess.ses) != -1) +- nids[count++] = ciphers[i].nid; +- } +- put_dev_crypto(fd); +- +- if (count > 0) +- *cnids = nids; +- else +- *cnids = NULL; +- return (count); ++ static int 
nids[CRYPTO_ALGORITHM_MAX]; ++ struct session_op sess; ++ int fd, i, count = 0; ++ unsigned char fake_key[CRYPTO_CIPHER_MAX_KEY_LEN]; ++ ++ if ((fd = get_dev_crypto()) < 0) { ++ *cnids = NULL; ++ return (0); ++ } ++ memset(&sess, 0, sizeof(sess)); ++ sess.key = (void*)fake_key; ++ ++ for (i = 0; ciphers[i].id && count < CRYPTO_ALGORITHM_MAX; i++) { ++ if (ciphers[i].nid == NID_undef) ++ continue; ++ sess.cipher = ciphers[i].id; ++ sess.keylen = ciphers[i].keylen; ++ sess.mac = 0; ++ if (ioctl(fd, CIOCGSESSION, &sess) != -1 && ++ ioctl(fd, CIOCFSESSION, &sess.ses) != -1) ++ nids[count++] = ciphers[i].nid; ++ } ++ put_dev_crypto(fd); ++ ++ if (count > 0) ++ *cnids = nids; ++ else ++ *cnids = NULL; ++ return (count); + } + +-# ifdef USE_CRYPTODEV_DIGESTS ++#ifdef USE_CRYPTODEV_DIGESTS + /* + * Find out what digests /dev/crypto will let us have a session for. + * XXX note, that some of these openssl doesn't deal with yet! + * returning them here is harmless, as long as we return NULL + * when asked for a handler in the cryptodev_engine_digests routine + */ +-static int get_cryptodev_digests(const int **cnids) ++static int ++get_cryptodev_digests(const int **cnids) + { +- static int nids[CRYPTO_ALGORITHM_MAX]; +- struct session_op sess; +- int fd, i, count = 0; +- +- if ((fd = get_dev_crypto()) < 0) { +- *cnids = NULL; +- return (0); +- } +- memset(&sess, 0, sizeof(sess)); +- sess.mackey = (caddr_t) "123456789abcdefghijklmno"; +- for (i = 0; digests[i].id && count < CRYPTO_ALGORITHM_MAX; i++) { +- if (digests[i].nid == NID_undef) +- continue; +- sess.mac = digests[i].id; +- sess.mackeylen = digests[i].keylen; +- sess.cipher = 0; +- if (ioctl(fd, CIOCGSESSION, &sess) != -1 && +- ioctl(fd, CIOCFSESSION, &sess.ses) != -1) +- nids[count++] = digests[i].nid; +- } +- put_dev_crypto(fd); +- +- if (count > 0) +- *cnids = nids; +- else +- *cnids = NULL; +- return (count); ++ static int nids[CRYPTO_ALGORITHM_MAX]; ++ unsigned char fake_key[CRYPTO_CIPHER_MAX_KEY_LEN]; ++ struct session_op sess; ++ int fd, i, count = 0; ++ ++ if ((fd = get_dev_crypto()) < 0) { ++ *cnids = NULL; ++ return (0); ++ } ++ memset(&sess, 0, sizeof(sess)); ++ sess.mackey = fake_key; ++ for (i = 0; digests[i].id && count < CRYPTO_ALGORITHM_MAX; i++) { ++ if (digests[i].nid == NID_undef) ++ continue; ++ sess.mac = digests[i].id; ++ sess.mackeylen = 8; ++ sess.cipher = 0; ++ if (ioctl(fd, CIOCGSESSION, &sess) != -1 && ++ ioctl(fd, CIOCFSESSION, &sess.ses) != -1) ++ nids[count++] = digests[i].nid; ++ } ++ put_dev_crypto(fd); ++ ++ if (count > 0) ++ *cnids = nids; ++ else ++ *cnids = NULL; ++ return (count); + } +-# endif /* 0 */ ++#endif /* 0 */ + + /* + * Find the useable ciphers|digests from dev/crypto - this is the first +@@ -376,158 +343,161 @@ + * want most of the decisions made about what we actually want + * to use from /dev/crypto. + */ +-static int cryptodev_usable_ciphers(const int **nids) ++static int ++cryptodev_usable_ciphers(const int **nids) + { +- return (get_cryptodev_ciphers(nids)); ++ return (get_cryptodev_ciphers(nids)); + } + +-static int cryptodev_usable_digests(const int **nids) ++static int ++cryptodev_usable_digests(const int **nids) + { +-# ifdef USE_CRYPTODEV_DIGESTS +- return (get_cryptodev_digests(nids)); +-# else +- /* +- * XXXX just disable all digests for now, because it sucks. +- * we need a better way to decide this - i.e. I may not +- * want digests on slow cards like hifn on fast machines, +- * but might want them on slow or loaded machines, etc. 
+- * will also want them when using crypto cards that don't +- * suck moose gonads - would be nice to be able to decide something +- * as reasonable default without having hackery that's card dependent. +- * of course, the default should probably be just do everything, +- * with perhaps a sysctl to turn algoritms off (or have them off +- * by default) on cards that generally suck like the hifn. +- */ +- *nids = NULL; +- return (0); +-# endif ++#ifdef USE_CRYPTODEV_DIGESTS ++ return (get_cryptodev_digests(nids)); ++#else ++ /* ++ * XXXX just disable all digests for now, because it sucks. ++ * we need a better way to decide this - i.e. I may not ++ * want digests on slow cards like hifn on fast machines, ++ * but might want them on slow or loaded machines, etc. ++ * will also want them when using crypto cards that don't ++ * suck moose gonads - would be nice to be able to decide something ++ * as reasonable default without having hackery that's card dependent. ++ * of course, the default should probably be just do everything, ++ * with perhaps a sysctl to turn algoritms off (or have them off ++ * by default) on cards that generally suck like the hifn. ++ */ ++ *nids = NULL; ++ return (0); ++#endif + } + + static int + cryptodev_cipher(EVP_CIPHER_CTX *ctx, unsigned char *out, +- const unsigned char *in, size_t inl) ++ const unsigned char *in, size_t inl) + { +- struct crypt_op cryp; +- struct dev_crypto_state *state = ctx->cipher_data; +- struct session_op *sess = &state->d_sess; +- const void *iiv; +- unsigned char save_iv[EVP_MAX_IV_LENGTH]; +- +- if (state->d_fd < 0) +- return (0); +- if (!inl) +- return (1); +- if ((inl % ctx->cipher->block_size) != 0) +- return (0); +- +- memset(&cryp, 0, sizeof(cryp)); +- +- cryp.ses = sess->ses; +- cryp.flags = 0; +- cryp.len = inl; +- cryp.src = (caddr_t) in; +- cryp.dst = (caddr_t) out; +- cryp.mac = 0; +- +- cryp.op = ctx->encrypt ? COP_ENCRYPT : COP_DECRYPT; +- +- if (ctx->cipher->iv_len) { +- cryp.iv = (caddr_t) ctx->iv; +- if (!ctx->encrypt) { +- iiv = in + inl - ctx->cipher->iv_len; +- memcpy(save_iv, iiv, ctx->cipher->iv_len); +- } +- } else +- cryp.iv = NULL; +- +- if (ioctl(state->d_fd, CIOCCRYPT, &cryp) == -1) { +- /* +- * XXX need better errror handling this can fail for a number of +- * different reasons. +- */ +- return (0); +- } +- +- if (ctx->cipher->iv_len) { +- if (ctx->encrypt) +- iiv = out + inl - ctx->cipher->iv_len; +- else +- iiv = save_iv; +- memcpy(ctx->iv, iiv, ctx->cipher->iv_len); +- } +- return (1); ++ struct crypt_op cryp; ++ struct dev_crypto_state *state = ctx->cipher_data; ++ struct session_op *sess = &state->d_sess; ++ const void *iiv; ++ unsigned char save_iv[EVP_MAX_IV_LENGTH]; ++ ++ if (state->d_fd < 0) ++ return (0); ++ if (!inl) ++ return (1); ++ if ((inl % ctx->cipher->block_size) != 0) ++ return (0); ++ ++ memset(&cryp, 0, sizeof(cryp)); ++ ++ cryp.ses = sess->ses; ++ cryp.flags = 0; ++ cryp.len = inl; ++ cryp.src = (void*) in; ++ cryp.dst = (void*) out; ++ cryp.mac = 0; ++ ++ cryp.op = ctx->encrypt ? COP_ENCRYPT : COP_DECRYPT; ++ ++ if (ctx->cipher->iv_len) { ++ cryp.iv = (void*) ctx->iv; ++ if (!ctx->encrypt) { ++ iiv = in + inl - ctx->cipher->iv_len; ++ memcpy(save_iv, iiv, ctx->cipher->iv_len); ++ } ++ } else ++ cryp.iv = NULL; ++ ++ if (ioctl(state->d_fd, CIOCCRYPT, &cryp) == -1) { ++ /* XXX need better errror handling ++ * this can fail for a number of different reasons. 
++ */ ++ return (0); ++ } ++ ++ if (ctx->cipher->iv_len) { ++ if (ctx->encrypt) ++ iiv = out + inl - ctx->cipher->iv_len; ++ else ++ iiv = save_iv; ++ memcpy(ctx->iv, iiv, ctx->cipher->iv_len); ++ } ++ return (1); + } + + static int + cryptodev_init_key(EVP_CIPHER_CTX *ctx, const unsigned char *key, +- const unsigned char *iv, int enc) ++ const unsigned char *iv, int enc) + { +- struct dev_crypto_state *state = ctx->cipher_data; +- struct session_op *sess = &state->d_sess; +- int cipher = -1, i; +- +- for (i = 0; ciphers[i].id; i++) +- if (ctx->cipher->nid == ciphers[i].nid && +- ctx->cipher->iv_len <= ciphers[i].ivmax && +- ctx->key_len == ciphers[i].keylen) { +- cipher = ciphers[i].id; +- break; +- } +- +- if (!ciphers[i].id) { +- state->d_fd = -1; +- return (0); +- } +- +- memset(sess, 0, sizeof(struct session_op)); +- +- if ((state->d_fd = get_dev_crypto()) < 0) +- return (0); +- +- sess->key = (caddr_t) key; +- sess->keylen = ctx->key_len; +- sess->cipher = cipher; +- +- if (ioctl(state->d_fd, CIOCGSESSION, sess) == -1) { +- put_dev_crypto(state->d_fd); +- state->d_fd = -1; +- return (0); +- } +- return (1); ++ struct dev_crypto_state *state = ctx->cipher_data; ++ struct session_op *sess = &state->d_sess; ++ int cipher = -1, i; ++ ++ for (i = 0; ciphers[i].id; i++) ++ if (ctx->cipher->nid == ciphers[i].nid && ++ ctx->cipher->iv_len <= ciphers[i].ivmax && ++ ctx->key_len == ciphers[i].keylen) { ++ cipher = ciphers[i].id; ++ break; ++ } ++ ++ if (!ciphers[i].id) { ++ state->d_fd = -1; ++ return (0); ++ } ++ ++ memset(sess, 0, sizeof(struct session_op)); ++ ++ if ((state->d_fd = get_dev_crypto()) < 0) ++ return (0); ++ ++ sess->key = (void*)key; ++ sess->keylen = ctx->key_len; ++ sess->cipher = cipher; ++ ++ if (ioctl(state->d_fd, CIOCGSESSION, sess) == -1) { ++ put_dev_crypto(state->d_fd); ++ state->d_fd = -1; ++ return (0); ++ } ++ return (1); + } + + /* + * free anything we allocated earlier when initting a + * session, and close the session. + */ +-static int cryptodev_cleanup(EVP_CIPHER_CTX *ctx) ++static int ++cryptodev_cleanup(EVP_CIPHER_CTX *ctx) + { +- int ret = 0; +- struct dev_crypto_state *state = ctx->cipher_data; +- struct session_op *sess = &state->d_sess; +- +- if (state->d_fd < 0) +- return (0); +- +- /* +- * XXX if this ioctl fails, someting's wrong. the invoker may have called +- * us with a bogus ctx, or we could have a device that for whatever +- * reason just doesn't want to play ball - it's not clear what's right +- * here - should this be an error? should it just increase a counter, +- * hmm. For right now, we return 0 - I don't believe that to be "right". +- * we could call the gorpy openssl lib error handlers that print messages +- * to users of the library. hmm.. +- */ +- +- if (ioctl(state->d_fd, CIOCFSESSION, &sess->ses) == -1) { +- ret = 0; +- } else { +- ret = 1; +- } +- put_dev_crypto(state->d_fd); +- state->d_fd = -1; ++ int ret = 0; ++ struct dev_crypto_state *state = ctx->cipher_data; ++ struct session_op *sess = &state->d_sess; ++ ++ if (state->d_fd < 0) ++ return (0); ++ ++ /* XXX if this ioctl fails, someting's wrong. the invoker ++ * may have called us with a bogus ctx, or we could ++ * have a device that for whatever reason just doesn't ++ * want to play ball - it's not clear what's right ++ * here - should this be an error? should it just ++ * increase a counter, hmm. For right now, we return ++ * 0 - I don't believe that to be "right". we could ++ * call the gorpy openssl lib error handlers that ++ * print messages to users of the library. hmm.. 
++ */ ++ ++ if (ioctl(state->d_fd, CIOCFSESSION, &sess->ses) == -1) { ++ ret = 0; ++ } else { ++ ret = 1; ++ } ++ put_dev_crypto(state->d_fd); ++ state->d_fd = -1; + +- return (ret); ++ return (ret); + } + + /* +@@ -535,153 +505,165 @@ + * gets called when libcrypto requests a cipher NID. + */ + ++static int cryptodev_cipher_ctrl(EVP_CIPHER_CTX *ctx, int type, int p1, void *p2) ++{ ++ struct dev_crypto_state *state = ctx->cipher_data; ++ struct session_op *sess = &state->d_sess; ++ ++ if (type == EVP_CTRL_COPY) { ++ EVP_CIPHER_CTX *out = p2; ++ return cryptodev_init_key(out, sess->key, ctx->iv, 0); ++ } ++ ++ return 0; ++} ++ + /* RC4 */ + const EVP_CIPHER cryptodev_rc4 = { +- NID_rc4, +- 1, 16, 0, +- EVP_CIPH_VARIABLE_LENGTH, +- cryptodev_init_key, +- cryptodev_cipher, +- cryptodev_cleanup, +- sizeof(struct dev_crypto_state), +- NULL, +- NULL, +- NULL ++ NID_rc4, ++ 1, 16, 0, ++ EVP_CIPH_VARIABLE_LENGTH|EVP_CIPH_CUSTOM_COPY, ++ cryptodev_init_key, ++ cryptodev_cipher, ++ cryptodev_cleanup, ++ sizeof(struct dev_crypto_state), ++ NULL, ++ NULL, ++ cryptodev_cipher_ctrl + }; + + /* DES CBC EVP */ + const EVP_CIPHER cryptodev_des_cbc = { +- NID_des_cbc, +- 8, 8, 8, +- EVP_CIPH_CBC_MODE, +- cryptodev_init_key, +- cryptodev_cipher, +- cryptodev_cleanup, +- sizeof(struct dev_crypto_state), +- EVP_CIPHER_set_asn1_iv, +- EVP_CIPHER_get_asn1_iv, +- NULL ++ NID_des_cbc, ++ 8, 8, 8, ++ EVP_CIPH_CBC_MODE|EVP_CIPH_CUSTOM_COPY, ++ cryptodev_init_key, ++ cryptodev_cipher, ++ cryptodev_cleanup, ++ sizeof(struct dev_crypto_state), ++ EVP_CIPHER_set_asn1_iv, ++ EVP_CIPHER_get_asn1_iv, ++ cryptodev_cipher_ctrl + }; + + /* 3DES CBC EVP */ + const EVP_CIPHER cryptodev_3des_cbc = { +- NID_des_ede3_cbc, +- 8, 24, 8, +- EVP_CIPH_CBC_MODE, +- cryptodev_init_key, +- cryptodev_cipher, +- cryptodev_cleanup, +- sizeof(struct dev_crypto_state), +- EVP_CIPHER_set_asn1_iv, +- EVP_CIPHER_get_asn1_iv, +- NULL ++ NID_des_ede3_cbc, ++ 8, 24, 8, ++ EVP_CIPH_CBC_MODE|EVP_CIPH_CUSTOM_COPY, ++ cryptodev_init_key, ++ cryptodev_cipher, ++ cryptodev_cleanup, ++ sizeof(struct dev_crypto_state), ++ EVP_CIPHER_set_asn1_iv, ++ EVP_CIPHER_get_asn1_iv, ++ cryptodev_cipher_ctrl + }; + + const EVP_CIPHER cryptodev_bf_cbc = { +- NID_bf_cbc, +- 8, 16, 8, +- EVP_CIPH_CBC_MODE, +- cryptodev_init_key, +- cryptodev_cipher, +- cryptodev_cleanup, +- sizeof(struct dev_crypto_state), +- EVP_CIPHER_set_asn1_iv, +- EVP_CIPHER_get_asn1_iv, +- NULL ++ NID_bf_cbc, ++ 8, 16, 8, ++ EVP_CIPH_CBC_MODE|EVP_CIPH_CUSTOM_COPY, ++ cryptodev_init_key, ++ cryptodev_cipher, ++ cryptodev_cleanup, ++ sizeof(struct dev_crypto_state), ++ EVP_CIPHER_set_asn1_iv, ++ EVP_CIPHER_get_asn1_iv, ++ cryptodev_cipher_ctrl + }; + + const EVP_CIPHER cryptodev_cast_cbc = { +- NID_cast5_cbc, +- 8, 16, 8, +- EVP_CIPH_CBC_MODE, +- cryptodev_init_key, +- cryptodev_cipher, +- cryptodev_cleanup, +- sizeof(struct dev_crypto_state), +- EVP_CIPHER_set_asn1_iv, +- EVP_CIPHER_get_asn1_iv, +- NULL ++ NID_cast5_cbc, ++ 8, 16, 8, ++ EVP_CIPH_CBC_MODE|EVP_CIPH_CUSTOM_COPY, ++ cryptodev_init_key, ++ cryptodev_cipher, ++ cryptodev_cleanup, ++ sizeof(struct dev_crypto_state), ++ EVP_CIPHER_set_asn1_iv, ++ EVP_CIPHER_get_asn1_iv, ++ cryptodev_cipher_ctrl + }; + + const EVP_CIPHER cryptodev_aes_cbc = { +- NID_aes_128_cbc, +- 16, 16, 16, +- EVP_CIPH_CBC_MODE, +- cryptodev_init_key, +- cryptodev_cipher, +- cryptodev_cleanup, +- sizeof(struct dev_crypto_state), +- EVP_CIPHER_set_asn1_iv, +- EVP_CIPHER_get_asn1_iv, +- NULL ++ NID_aes_128_cbc, ++ 16, 16, 16, ++ EVP_CIPH_CBC_MODE|EVP_CIPH_CUSTOM_COPY, ++ 
cryptodev_init_key, ++ cryptodev_cipher, ++ cryptodev_cleanup, ++ sizeof(struct dev_crypto_state), ++ EVP_CIPHER_set_asn1_iv, ++ EVP_CIPHER_get_asn1_iv, ++ cryptodev_cipher_ctrl + }; + + const EVP_CIPHER cryptodev_aes_192_cbc = { +- NID_aes_192_cbc, +- 16, 24, 16, +- EVP_CIPH_CBC_MODE, +- cryptodev_init_key, +- cryptodev_cipher, +- cryptodev_cleanup, +- sizeof(struct dev_crypto_state), +- EVP_CIPHER_set_asn1_iv, +- EVP_CIPHER_get_asn1_iv, +- NULL ++ NID_aes_192_cbc, ++ 16, 24, 16, ++ EVP_CIPH_CBC_MODE|EVP_CIPH_CUSTOM_COPY, ++ cryptodev_init_key, ++ cryptodev_cipher, ++ cryptodev_cleanup, ++ sizeof(struct dev_crypto_state), ++ EVP_CIPHER_set_asn1_iv, ++ EVP_CIPHER_get_asn1_iv, ++ cryptodev_cipher_ctrl + }; + + const EVP_CIPHER cryptodev_aes_256_cbc = { +- NID_aes_256_cbc, +- 16, 32, 16, +- EVP_CIPH_CBC_MODE, +- cryptodev_init_key, +- cryptodev_cipher, +- cryptodev_cleanup, +- sizeof(struct dev_crypto_state), +- EVP_CIPHER_set_asn1_iv, +- EVP_CIPHER_get_asn1_iv, +- NULL ++ NID_aes_256_cbc, ++ 16, 32, 16, ++ EVP_CIPH_CBC_MODE|EVP_CIPH_CUSTOM_COPY, ++ cryptodev_init_key, ++ cryptodev_cipher, ++ cryptodev_cleanup, ++ sizeof(struct dev_crypto_state), ++ EVP_CIPHER_set_asn1_iv, ++ EVP_CIPHER_get_asn1_iv, ++ cryptodev_cipher_ctrl + }; +- +-# ifdef CRYPTO_AES_CTR ++#ifdef CRYPTO_AES_CTR + const EVP_CIPHER cryptodev_aes_ctr = { +- NID_aes_128_ctr, +- 16, 16, 14, +- EVP_CIPH_CTR_MODE, +- cryptodev_init_key, +- cryptodev_cipher, +- cryptodev_cleanup, +- sizeof(struct dev_crypto_state), +- EVP_CIPHER_set_asn1_iv, +- EVP_CIPHER_get_asn1_iv, +- NULL ++ NID_aes_128_ctr, ++ 16, 16, 14, ++ EVP_CIPH_CTR_MODE|EVP_CIPH_CUSTOM_COPY, ++ cryptodev_init_key, ++ cryptodev_cipher, ++ cryptodev_cleanup, ++ sizeof(struct dev_crypto_state), ++ EVP_CIPHER_set_asn1_iv, ++ EVP_CIPHER_get_asn1_iv, ++ cryptodev_cipher_ctrl + }; + + const EVP_CIPHER cryptodev_aes_ctr_192 = { +- NID_aes_192_ctr, +- 16, 24, 14, +- EVP_CIPH_CTR_MODE, +- cryptodev_init_key, +- cryptodev_cipher, +- cryptodev_cleanup, +- sizeof(struct dev_crypto_state), +- EVP_CIPHER_set_asn1_iv, +- EVP_CIPHER_get_asn1_iv, +- NULL ++ NID_aes_192_ctr, ++ 16, 24, 14, ++ EVP_CIPH_CTR_MODE|EVP_CIPH_CUSTOM_COPY, ++ cryptodev_init_key, ++ cryptodev_cipher, ++ cryptodev_cleanup, ++ sizeof(struct dev_crypto_state), ++ EVP_CIPHER_set_asn1_iv, ++ EVP_CIPHER_get_asn1_iv, ++ cryptodev_cipher_ctrl + }; + + const EVP_CIPHER cryptodev_aes_ctr_256 = { +- NID_aes_256_ctr, +- 16, 32, 14, +- EVP_CIPH_CTR_MODE, +- cryptodev_init_key, +- cryptodev_cipher, +- cryptodev_cleanup, +- sizeof(struct dev_crypto_state), +- EVP_CIPHER_set_asn1_iv, +- EVP_CIPHER_get_asn1_iv, +- NULL ++ NID_aes_256_ctr, ++ 16, 32, 14, ++ EVP_CIPH_CTR_MODE|EVP_CIPH_CUSTOM_COPY, ++ cryptodev_init_key, ++ cryptodev_cipher, ++ cryptodev_cleanup, ++ sizeof(struct dev_crypto_state), ++ EVP_CIPHER_set_asn1_iv, ++ EVP_CIPHER_get_asn1_iv, ++ cryptodev_cipher_ctrl + }; +-# endif ++#endif + /* + * Registered by the ENGINE when used to find out how to deal with + * a particular NID in the ENGINE. 
this says what we'll do at the +@@ -689,321 +671,412 @@ + */ + static int + cryptodev_engine_ciphers(ENGINE *e, const EVP_CIPHER **cipher, +- const int **nids, int nid) ++ const int **nids, int nid) + { +- if (!cipher) +- return (cryptodev_usable_ciphers(nids)); ++ if (!cipher) ++ return (cryptodev_usable_ciphers(nids)); + +- switch (nid) { +- case NID_rc4: +- *cipher = &cryptodev_rc4; +- break; +- case NID_des_ede3_cbc: +- *cipher = &cryptodev_3des_cbc; +- break; +- case NID_des_cbc: +- *cipher = &cryptodev_des_cbc; +- break; +- case NID_bf_cbc: +- *cipher = &cryptodev_bf_cbc; +- break; +- case NID_cast5_cbc: +- *cipher = &cryptodev_cast_cbc; +- break; +- case NID_aes_128_cbc: +- *cipher = &cryptodev_aes_cbc; +- break; +- case NID_aes_192_cbc: +- *cipher = &cryptodev_aes_192_cbc; +- break; +- case NID_aes_256_cbc: +- *cipher = &cryptodev_aes_256_cbc; +- break; +-# ifdef CRYPTO_AES_CTR +- case NID_aes_128_ctr: +- *cipher = &cryptodev_aes_ctr; +- break; +- case NID_aes_192_ctr: +- *cipher = &cryptodev_aes_ctr_192; +- break; +- case NID_aes_256_ctr: +- *cipher = &cryptodev_aes_ctr_256; +- break; +-# endif +- default: +- *cipher = NULL; +- break; +- } +- return (*cipher != NULL); ++ switch (nid) { ++ case NID_rc4: ++ *cipher = &cryptodev_rc4; ++ break; ++ case NID_des_ede3_cbc: ++ *cipher = &cryptodev_3des_cbc; ++ break; ++ case NID_des_cbc: ++ *cipher = &cryptodev_des_cbc; ++ break; ++ case NID_bf_cbc: ++ *cipher = &cryptodev_bf_cbc; ++ break; ++ case NID_cast5_cbc: ++ *cipher = &cryptodev_cast_cbc; ++ break; ++ case NID_aes_128_cbc: ++ *cipher = &cryptodev_aes_cbc; ++ break; ++ case NID_aes_192_cbc: ++ *cipher = &cryptodev_aes_192_cbc; ++ break; ++ case NID_aes_256_cbc: ++ *cipher = &cryptodev_aes_256_cbc; ++ break; ++#ifdef CRYPTO_AES_CTR ++ case NID_aes_128_ctr: ++ *cipher = &cryptodev_aes_ctr; ++ break; ++ case NID_aes_192_ctr: ++ *cipher = &cryptodev_aes_ctr_192; ++ break; ++ case NID_aes_256_ctr: ++ *cipher = &cryptodev_aes_ctr_256; ++ break; ++#endif ++ default: ++ *cipher = NULL; ++ break; ++ } ++ return (*cipher != NULL); + } + +-# ifdef USE_CRYPTODEV_DIGESTS ++ ++#ifdef USE_CRYPTODEV_DIGESTS + + /* convert digest type to cryptodev */ +-static int digest_nid_to_cryptodev(int nid) ++static int ++digest_nid_to_cryptodev(int nid) + { +- int i; ++ int i; + +- for (i = 0; digests[i].id; i++) +- if (digests[i].nid == nid) +- return (digests[i].id); +- return (0); ++ for (i = 0; digests[i].id; i++) ++ if (digests[i].nid == nid) ++ return (digests[i].id); ++ return (0); + } + +-static int digest_key_length(int nid) +-{ +- int i; +- +- for (i = 0; digests[i].id; i++) +- if (digests[i].nid == nid) +- return digests[i].keylen; +- return (0); +-} + + static int cryptodev_digest_init(EVP_MD_CTX *ctx) + { +- struct dev_crypto_state *state = ctx->md_data; +- struct session_op *sess = &state->d_sess; +- int digest; +- +- if ((digest = digest_nid_to_cryptodev(ctx->digest->type)) == NID_undef) { +- printf("cryptodev_digest_init: Can't get digest \n"); +- return (0); +- } +- +- memset(state, 0, sizeof(struct dev_crypto_state)); +- +- if ((state->d_fd = get_dev_crypto()) < 0) { +- printf("cryptodev_digest_init: Can't get Dev \n"); +- return (0); +- } +- +- sess->mackey = state->dummy_mac_key; +- sess->mackeylen = digest_key_length(ctx->digest->type); +- sess->mac = digest; +- +- if (ioctl(state->d_fd, CIOCGSESSION, sess) < 0) { +- put_dev_crypto(state->d_fd); +- state->d_fd = -1; +- printf("cryptodev_digest_init: Open session failed\n"); +- return (0); +- } ++ struct dev_crypto_state *state = 
ctx->md_data; ++ struct session_op *sess = &state->d_sess; ++ int digest; ++ ++ if ((digest = digest_nid_to_cryptodev(ctx->digest->type)) == NID_undef){ ++ printf("cryptodev_digest_init: Can't get digest \n"); ++ return (0); ++ } ++ memset(state, 0, sizeof(struct dev_crypto_state)); ++ ++ if ((state->d_fd = get_dev_crypto()) < 0) { ++ printf("cryptodev_digest_init: Can't get Dev \n"); ++ return (0); ++ } ++ ++ sess->mackey = NULL; ++ sess->mackeylen = 0; ++ sess->mac = digest; ++ ++ if (ioctl(state->d_fd, CIOCGSESSION, sess) < 0) { ++ put_dev_crypto(state->d_fd); ++ state->d_fd = -1; ++ printf("cryptodev_digest_init: Open session failed\n"); ++ return (0); ++ } + +- return (1); ++ return (1); + } + + static int cryptodev_digest_update(EVP_MD_CTX *ctx, const void *data, +- size_t count) ++ size_t count) + { +- struct crypt_op cryp; +- struct dev_crypto_state *state = ctx->md_data; +- struct session_op *sess = &state->d_sess; +- +- if (!data || state->d_fd < 0) { +- printf("cryptodev_digest_update: illegal inputs \n"); +- return (0); +- } +- +- if (!count) { +- return (0); +- } +- +- if (!(ctx->flags & EVP_MD_CTX_FLAG_ONESHOT)) { +- /* if application doesn't support one buffer */ +- char *mac_data = +- OPENSSL_realloc(state->mac_data, state->mac_len + count); +- +- if (mac_data == NULL) { +- printf("cryptodev_digest_update: realloc failed\n"); +- return (0); +- } +- +- state->mac_data = mac_data; +- memcpy(state->mac_data + state->mac_len, data, count); +- state->mac_len += count; +- +- return (1); +- } +- +- memset(&cryp, 0, sizeof(cryp)); +- +- cryp.ses = sess->ses; +- cryp.flags = 0; +- cryp.len = count; +- cryp.src = (caddr_t) data; +- cryp.dst = NULL; +- cryp.mac = (caddr_t) state->digest_res; +- if (ioctl(state->d_fd, CIOCCRYPT, &cryp) < 0) { +- printf("cryptodev_digest_update: digest failed\n"); +- return (0); +- } +- return (1); ++ struct dev_crypto_state *state = ctx->md_data; ++ struct crypt_op cryp; ++ struct session_op *sess = &state->d_sess; ++ char *new_mac_data; ++ ++ if (!data || state->d_fd < 0) { ++ printf("cryptodev_digest_update: illegal inputs \n"); ++ return (0); ++ } ++ ++ if (!count) { ++ return (1); ++ } ++ ++ if (!(ctx->flags & EVP_MD_CTX_FLAG_ONESHOT)) { ++ /* if application doesn't support one buffer */ ++ new_mac_data = OPENSSL_realloc(state->mac_data, state->mac_len + count); ++ ++ if (!new_mac_data) { ++ printf("cryptodev_digest_update: realloc failed\n"); ++ return (0); ++ } ++ state->mac_data = new_mac_data; ++ ++ memcpy(state->mac_data + state->mac_len, data, count); ++ state->mac_len += count; ++ ++ return (1); ++ } ++ ++ memset(&cryp, 0, sizeof(cryp)); ++ ++ cryp.ses = sess->ses; ++ cryp.flags = 0; ++ cryp.len = count; ++ cryp.src = (void*) data; ++ cryp.dst = NULL; ++ cryp.mac = (void*) state->digest_res; ++ if (ioctl(state->d_fd, CIOCCRYPT, &cryp) < 0) { ++ printf("cryptodev_digest_update: digest failed\n"); ++ return (0); ++ } ++ return (1); + } + ++ + static int cryptodev_digest_final(EVP_MD_CTX *ctx, unsigned char *md) + { +- struct crypt_op cryp; +- struct dev_crypto_state *state = ctx->md_data; +- struct session_op *sess = &state->d_sess; +- +- int ret = 1; +- +- if (!md || state->d_fd < 0) { +- printf("cryptodev_digest_final: illegal input\n"); +- return (0); +- } +- +- if (!(ctx->flags & EVP_MD_CTX_FLAG_ONESHOT)) { +- /* if application doesn't support one buffer */ +- memset(&cryp, 0, sizeof(cryp)); +- cryp.ses = sess->ses; +- cryp.flags = 0; +- cryp.len = state->mac_len; +- cryp.src = state->mac_data; +- cryp.dst = NULL; +- cryp.mac = (caddr_t) 
md; +- if (ioctl(state->d_fd, CIOCCRYPT, &cryp) < 0) { +- printf("cryptodev_digest_final: digest failed\n"); +- return (0); +- } ++ struct crypt_op cryp; ++ struct dev_crypto_state *state = ctx->md_data; ++ struct session_op *sess = &state->d_sess; ++ ++ if (!md || state->d_fd < 0) { ++ printf("cryptodev_digest_final: illegal input\n"); ++ return(0); ++ } ++ ++ if (! (ctx->flags & EVP_MD_CTX_FLAG_ONESHOT) ) { ++ /* if application doesn't support one buffer */ ++ memset(&cryp, 0, sizeof(cryp)); ++ cryp.ses = sess->ses; ++ cryp.flags = 0; ++ cryp.len = state->mac_len; ++ cryp.src = state->mac_data; ++ cryp.dst = NULL; ++ cryp.mac = (void*)md; ++ if (ioctl(state->d_fd, CIOCCRYPT, &cryp) < 0) { ++ printf("cryptodev_digest_final: digest failed\n"); ++ return (0); ++ } + +- return 1; +- } ++ return 1; ++ } + +- memcpy(md, state->digest_res, ctx->digest->md_size); ++ memcpy(md, state->digest_res, ctx->digest->md_size); + +- return (ret); ++ return 1; + } + ++ + static int cryptodev_digest_cleanup(EVP_MD_CTX *ctx) + { +- int ret = 1; +- struct dev_crypto_state *state = ctx->md_data; +- struct session_op *sess = &state->d_sess; +- +- if (state == NULL) +- return 0; +- +- if (state->d_fd < 0) { +- printf("cryptodev_digest_cleanup: illegal input\n"); +- return (0); +- } +- +- if (state->mac_data) { +- OPENSSL_free(state->mac_data); +- state->mac_data = NULL; +- state->mac_len = 0; +- } +- +- if (ioctl(state->d_fd, CIOCFSESSION, &sess->ses) < 0) { +- printf("cryptodev_digest_cleanup: failed to close session\n"); +- ret = 0; +- } else { +- ret = 1; +- } +- put_dev_crypto(state->d_fd); +- state->d_fd = -1; +- +- return (ret); +-} +- +-static int cryptodev_digest_copy(EVP_MD_CTX *to, const EVP_MD_CTX *from) +-{ +- struct dev_crypto_state *fstate = from->md_data; +- struct dev_crypto_state *dstate = to->md_data; +- struct session_op *sess; +- int digest; +- +- if (dstate == NULL || fstate == NULL) +- return 1; +- +- memcpy(dstate, fstate, sizeof(struct dev_crypto_state)); +- +- sess = &dstate->d_sess; +- +- digest = digest_nid_to_cryptodev(to->digest->type); +- +- sess->mackey = dstate->dummy_mac_key; +- sess->mackeylen = digest_key_length(to->digest->type); +- sess->mac = digest; +- +- dstate->d_fd = get_dev_crypto(); +- +- if (ioctl(dstate->d_fd, CIOCGSESSION, sess) < 0) { +- put_dev_crypto(dstate->d_fd); +- dstate->d_fd = -1; +- printf("cryptodev_digest_init: Open session failed\n"); +- return (0); +- } +- +- dstate->mac_len = fstate->mac_len; +- if (fstate->mac_len != 0) { +- if (fstate->mac_data != NULL) { +- dstate->mac_data = OPENSSL_malloc(fstate->mac_len); +- if (dstate->mac_data == NULL) { +- printf("cryptodev_digest_init: malloc failed\n"); +- return 0; +- } +- memcpy(dstate->mac_data, fstate->mac_data, fstate->mac_len); +- } +- } +- +- return 1; +-} +- +-const EVP_MD cryptodev_sha1 = { +- NID_sha1, +- NID_undef, +- SHA_DIGEST_LENGTH, +- EVP_MD_FLAG_ONESHOT, +- cryptodev_digest_init, +- cryptodev_digest_update, +- cryptodev_digest_final, +- cryptodev_digest_copy, +- cryptodev_digest_cleanup, +- EVP_PKEY_NULL_method, +- SHA_CBLOCK, +- sizeof(struct dev_crypto_state), +-}; +- +-const EVP_MD cryptodev_md5 = { +- NID_md5, +- NID_undef, +- 16 /* MD5_DIGEST_LENGTH */ , +- EVP_MD_FLAG_ONESHOT, +- cryptodev_digest_init, +- cryptodev_digest_update, +- cryptodev_digest_final, +- cryptodev_digest_copy, +- cryptodev_digest_cleanup, +- EVP_PKEY_NULL_method, +- 64 /* MD5_CBLOCK */ , +- sizeof(struct dev_crypto_state), ++ int ret = 1; ++ struct dev_crypto_state *state = ctx->md_data; ++ struct session_op 
*sess = &state->d_sess; ++ ++ if (state == NULL) ++ return 0; ++ ++ if (state->d_fd < 0) { ++ printf("cryptodev_digest_cleanup: illegal input\n"); ++ return (0); ++ } ++ ++ if (state->mac_data) { ++ OPENSSL_free(state->mac_data); ++ state->mac_data = NULL; ++ state->mac_len = 0; ++ } ++ ++ if (ioctl(state->d_fd, CIOCFSESSION, &sess->ses) < 0) { ++ printf("cryptodev_digest_cleanup: failed to close session\n"); ++ ret = 0; ++ } else { ++ ret = 1; ++ } ++ put_dev_crypto(state->d_fd); ++ state->d_fd = -1; ++ ++ return (ret); ++} ++ ++static int cryptodev_digest_copy(EVP_MD_CTX *to,const EVP_MD_CTX *from) ++{ ++ struct dev_crypto_state *fstate = from->md_data; ++ struct dev_crypto_state *dstate = to->md_data; ++ struct session_op *sess; ++ int digest; ++ ++ if (dstate == NULL || fstate == NULL) ++ return 1; ++ ++ memcpy(dstate, fstate, sizeof(struct dev_crypto_state)); ++ ++ sess = &dstate->d_sess; ++ ++ digest = digest_nid_to_cryptodev(to->digest->type); ++ ++ sess->mackey = NULL; ++ sess->mackeylen = 0; ++ sess->mac = digest; ++ ++ dstate->d_fd = get_dev_crypto(); ++ ++ if (ioctl(dstate->d_fd, CIOCGSESSION, sess) < 0) { ++ put_dev_crypto(dstate->d_fd); ++ dstate->d_fd = -1; ++ printf("cryptodev_digest_copy: Open session failed\n"); ++ return (0); ++ } ++ ++ if (fstate->mac_len != 0) { ++ if (fstate->mac_data != NULL) ++ { ++ dstate->mac_data = OPENSSL_malloc(fstate->mac_len); ++ if (dstate->mac_data == NULL) ++ { ++ printf("cryptodev_digest_copy: mac_data allocation failed\n"); ++ return (0); ++ } ++ memcpy(dstate->mac_data, fstate->mac_data, fstate->mac_len); ++ dstate->mac_len = fstate->mac_len; ++ } ++ } ++ ++ return 1; ++} ++ ++ ++static const EVP_MD cryptodev_sha1 = { ++ NID_sha1, ++ NID_sha1WithRSAEncryption, ++ SHA_DIGEST_LENGTH, ++#if defined(EVP_MD_FLAG_PKEY_METHOD_SIGNATURE) && defined(EVP_MD_FLAG_DIGALGID_ABSENT) ++ EVP_MD_FLAG_PKEY_METHOD_SIGNATURE| ++ EVP_MD_FLAG_DIGALGID_ABSENT| ++#endif ++ EVP_MD_FLAG_ONESHOT, ++ cryptodev_digest_init, ++ cryptodev_digest_update, ++ cryptodev_digest_final, ++ cryptodev_digest_copy, ++ cryptodev_digest_cleanup, ++ EVP_PKEY_RSA_method, ++ SHA_CBLOCK, ++ sizeof(EVP_MD *)+sizeof(struct dev_crypto_state), + }; + +-# endif /* USE_CRYPTODEV_DIGESTS */ ++static const EVP_MD cryptodev_sha256 = { ++ NID_sha256, ++ NID_sha256WithRSAEncryption, ++ SHA256_DIGEST_LENGTH, ++#if defined(EVP_MD_FLAG_PKEY_METHOD_SIGNATURE) && defined(EVP_MD_FLAG_DIGALGID_ABSENT) ++ EVP_MD_FLAG_PKEY_METHOD_SIGNATURE| ++ EVP_MD_FLAG_DIGALGID_ABSENT| ++#endif ++ EVP_MD_FLAG_ONESHOT, ++ cryptodev_digest_init, ++ cryptodev_digest_update, ++ cryptodev_digest_final, ++ cryptodev_digest_copy, ++ cryptodev_digest_cleanup, ++ EVP_PKEY_RSA_method, ++ SHA256_CBLOCK, ++ sizeof(EVP_MD *)+sizeof(struct dev_crypto_state), ++}; ++static const EVP_MD cryptodev_sha224 = { ++ NID_sha224, ++ NID_sha224WithRSAEncryption, ++ SHA224_DIGEST_LENGTH, ++#if defined(EVP_MD_FLAG_PKEY_METHOD_SIGNATURE) && defined(EVP_MD_FLAG_DIGALGID_ABSENT) ++ EVP_MD_FLAG_PKEY_METHOD_SIGNATURE| ++ EVP_MD_FLAG_DIGALGID_ABSENT| ++#endif ++ EVP_MD_FLAG_ONESHOT, ++ cryptodev_digest_init, ++ cryptodev_digest_update, ++ cryptodev_digest_final, ++ cryptodev_digest_copy, ++ cryptodev_digest_cleanup, ++ EVP_PKEY_RSA_method, ++ SHA256_CBLOCK, ++ sizeof(EVP_MD *)+sizeof(struct dev_crypto_state), ++}; ++ ++static const EVP_MD cryptodev_sha384 = { ++ NID_sha384, ++ NID_sha384WithRSAEncryption, ++ SHA384_DIGEST_LENGTH, ++#if defined(EVP_MD_FLAG_PKEY_METHOD_SIGNATURE) && defined(EVP_MD_FLAG_DIGALGID_ABSENT) ++ 
EVP_MD_FLAG_PKEY_METHOD_SIGNATURE| ++ EVP_MD_FLAG_DIGALGID_ABSENT| ++#endif ++ EVP_MD_FLAG_ONESHOT, ++ cryptodev_digest_init, ++ cryptodev_digest_update, ++ cryptodev_digest_final, ++ cryptodev_digest_copy, ++ cryptodev_digest_cleanup, ++ EVP_PKEY_RSA_method, ++ SHA512_CBLOCK, ++ sizeof(EVP_MD *)+sizeof(struct dev_crypto_state), ++}; ++ ++static const EVP_MD cryptodev_sha512 = { ++ NID_sha512, ++ NID_sha512WithRSAEncryption, ++ SHA512_DIGEST_LENGTH, ++#if defined(EVP_MD_FLAG_PKEY_METHOD_SIGNATURE) && defined(EVP_MD_FLAG_DIGALGID_ABSENT) ++ EVP_MD_FLAG_PKEY_METHOD_SIGNATURE| ++ EVP_MD_FLAG_DIGALGID_ABSENT| ++#endif ++ EVP_MD_FLAG_ONESHOT, ++ cryptodev_digest_init, ++ cryptodev_digest_update, ++ cryptodev_digest_final, ++ cryptodev_digest_copy, ++ cryptodev_digest_cleanup, ++ EVP_PKEY_RSA_method, ++ SHA512_CBLOCK, ++ sizeof(EVP_MD *)+sizeof(struct dev_crypto_state), ++}; ++ ++static const EVP_MD cryptodev_md5 = { ++ NID_md5, ++ NID_md5WithRSAEncryption, ++ 16 /* MD5_DIGEST_LENGTH */, ++#if defined(EVP_MD_FLAG_PKEY_METHOD_SIGNATURE) && defined(EVP_MD_FLAG_DIGALGID_ABSENT) ++ EVP_MD_FLAG_PKEY_METHOD_SIGNATURE| ++ EVP_MD_FLAG_DIGALGID_ABSENT| ++#endif ++ EVP_MD_FLAG_ONESHOT, ++ cryptodev_digest_init, ++ cryptodev_digest_update, ++ cryptodev_digest_final, ++ cryptodev_digest_copy, ++ cryptodev_digest_cleanup, ++ EVP_PKEY_RSA_method, ++ 64 /* MD5_CBLOCK */, ++ sizeof(EVP_MD *)+sizeof(struct dev_crypto_state), ++}; ++ ++#endif /* USE_CRYPTODEV_DIGESTS */ ++ + + static int + cryptodev_engine_digests(ENGINE *e, const EVP_MD **digest, +- const int **nids, int nid) ++ const int **nids, int nid) + { +- if (!digest) +- return (cryptodev_usable_digests(nids)); ++ if (!digest) ++ return (cryptodev_usable_digests(nids)); + +- switch (nid) { +-# ifdef USE_CRYPTODEV_DIGESTS +- case NID_md5: +- *digest = &cryptodev_md5; +- break; +- case NID_sha1: +- *digest = &cryptodev_sha1; +- break; +- default: +-# endif /* USE_CRYPTODEV_DIGESTS */ +- *digest = NULL; +- break; +- } +- return (*digest != NULL); ++ switch (nid) { ++#ifdef USE_CRYPTODEV_DIGESTS ++ case NID_md5: ++ *digest = &cryptodev_md5; ++ break; ++ case NID_sha1: ++ *digest = &cryptodev_sha1; ++ break; ++ case NID_sha224: ++ *digest = &cryptodev_sha224; ++ break; ++ case NID_sha256: ++ *digest = &cryptodev_sha256; ++ break; ++ case NID_sha384: ++ *digest = &cryptodev_sha384; ++ break; ++ case NID_sha512: ++ *digest = &cryptodev_sha512; ++ break; ++ default: ++#endif /* USE_CRYPTODEV_DIGESTS */ ++ *digest = NULL; ++ break; ++ } ++ return (*digest != NULL); + } + + /* +@@ -1011,434 +1084,420 @@ + * Upon completion of use, the caller is responsible for freeing + * crp->crp_p. 
+ */
+-static int bn2crparam(const BIGNUM *a, struct crparam *crp)
++static int
++bn2crparam(const BIGNUM *a, struct crparam *crp)
+ {
+- int i, j, k;
+- ssize_t bytes, bits;
+- u_char *b;
+-
+- crp->crp_p = NULL;
+- crp->crp_nbits = 0;
+-
+- bits = BN_num_bits(a);
+- bytes = (bits + 7) / 8;
+-
+- b = malloc(bytes);
+- if (b == NULL)
+- return (1);
+- memset(b, 0, bytes);
+-
+- crp->crp_p = (caddr_t) b;
+- crp->crp_nbits = bits;
+-
+- for (i = 0, j = 0; i < a->top; i++) {
+- for (k = 0; k < BN_BITS2 / 8; k++) {
+- if ((j + k) >= bytes)
+- return (0);
+- b[j + k] = a->d[i] >> (k * 8);
+- }
+- j += BN_BITS2 / 8;
+- }
+- return (0);
++ int i, j, k;
++ ssize_t bytes, bits;
++ u_char *b;
++
++ crp->crp_p = NULL;
++ crp->crp_nbits = 0;
++
++ bits = BN_num_bits(a);
++ bytes = (bits + 7) / 8;
++
++ b = malloc(bytes);
++ if (b == NULL)
++ return (1);
++ memset(b, 0, bytes);
++
++ crp->crp_p = (void*) b;
++ crp->crp_nbits = bits;
++
++ for (i = 0, j = 0; i < a->top; i++) {
++ for (k = 0; k < BN_BITS2 / 8; k++) {
++ if ((j + k) >= bytes)
++ return (0);
++ b[j + k] = a->d[i] >> (k * 8);
++ }
++ j += BN_BITS2 / 8;
++ }
++ return (0);
+ }
+
+ /* Convert a /dev/crypto parameter to a BIGNUM */
+-static int crparam2bn(struct crparam *crp, BIGNUM *a)
++static int
++crparam2bn(struct crparam *crp, BIGNUM *a)
+ {
+- u_int8_t *pd;
+- int i, bytes;
++ u_int8_t *pd;
++ int i, bytes;
+
+- bytes = (crp->crp_nbits + 7) / 8;
++ bytes = (crp->crp_nbits + 7) / 8;
+
+- if (bytes == 0)
+- return (-1);
++ if (bytes == 0)
++ return (-1);
+
+- if ((pd = (u_int8_t *) malloc(bytes)) == NULL)
+- return (-1);
++ if ((pd = (u_int8_t *) malloc(bytes)) == NULL)
++ return (-1);
+
+- for (i = 0; i < bytes; i++)
+- pd[i] = crp->crp_p[bytes - i - 1];
++ for (i = 0; i < bytes; i++)
++ pd[i] = crp->crp_p[bytes - i - 1];
+
+- BN_bin2bn(pd, bytes, a);
+- free(pd);
++ BN_bin2bn(pd, bytes, a);
++ free(pd);
+
+- return (0);
++ return (0);
+ }
+
+-static void zapparams(struct crypt_kop *kop)
++static void
++zapparams(struct crypt_kop *kop)
+ {
+- int i;
++ int i;
+
+- for (i = 0; i < kop->crk_iparams + kop->crk_oparams; i++) {
+- OPENSSL_free(kop->crk_param[i].crp_p);
+- kop->crk_param[i].crp_p = NULL;
+- kop->crk_param[i].crp_nbits = 0;
+- }
++ for (i = 0; i < kop->crk_iparams + kop->crk_oparams; i++) {
++ if (kop->crk_param[i].crp_p)
++ free(kop->crk_param[i].crp_p);
++ kop->crk_param[i].crp_p = NULL;
++ kop->crk_param[i].crp_nbits = 0;
++ }
+ }
+
+ static int
+-cryptodev_asym(struct crypt_kop *kop, int rlen, BIGNUM *r, int slen,
+- BIGNUM *s)
++cryptodev_asym(struct crypt_kop *kop, int rlen, BIGNUM *r, int slen, BIGNUM *s)
+ {
+- int fd, ret = -1;
+-
+- if ((fd = get_asym_dev_crypto()) < 0)
+- return ret;
++ int fd, ret = -1;
+
+- if (r) {
+- kop->crk_param[kop->crk_iparams].crp_p = OPENSSL_malloc(rlen);
+- if (kop->crk_param[kop->crk_iparams].crp_p == NULL)
+- return ret;
+- memset(kop->crk_param[kop->crk_iparams].crp_p, 0, (size_t)rlen);
+- kop->crk_param[kop->crk_iparams].crp_nbits = rlen * 8;
+- kop->crk_oparams++;
+- }
+- if (s) {
+- kop->crk_param[kop->crk_iparams + 1].crp_p = OPENSSL_malloc(slen);
+- /* No need to free the kop->crk_iparams parameter if it was allocated,
+- * callers of this routine have to free allocated parameters through
+- * zapparams both in case of success and failure
+- */
+- if (kop->crk_param[kop->crk_iparams+1].crp_p == NULL)
+- return ret;
+- memset(kop->crk_param[kop->crk_iparams + 1].crp_p, 0, (size_t)slen);
+- kop->crk_param[kop->crk_iparams + 1].crp_nbits = slen * 8;
+- kop->crk_oparams++;
+- }
++ if ((fd = get_asym_dev_crypto()) < 0)
++ return (ret);
+
+- if (ioctl(fd, CIOCKEY, kop) == 0) {
+- if (r)
+- crparam2bn(&kop->crk_param[kop->crk_iparams], r);
+- if (s)
+- crparam2bn(&kop->crk_param[kop->crk_iparams + 1], s);
+- ret = 0;
+- }
++ if (r) {
++ kop->crk_param[kop->crk_iparams].crp_p = calloc(rlen, sizeof(char));
++ kop->crk_param[kop->crk_iparams].crp_nbits = rlen * 8;
++ kop->crk_oparams++;
++ }
++ if (s) {
++ kop->crk_param[kop->crk_iparams+1].crp_p = calloc(slen, sizeof(char));
++ kop->crk_param[kop->crk_iparams+1].crp_nbits = slen * 8;
++ kop->crk_oparams++;
++ }
++
++ if (ioctl(fd, CIOCKEY, kop) == 0) {
++ if (r)
++ crparam2bn(&kop->crk_param[kop->crk_iparams], r);
++ if (s)
++ crparam2bn(&kop->crk_param[kop->crk_iparams+1], s);
++ ret = 0;
++ }
+
+- return ret;
++ return (ret);
+ }
+
+ static int
+ cryptodev_bn_mod_exp(BIGNUM *r, const BIGNUM *a, const BIGNUM *p,
+- const BIGNUM *m, BN_CTX *ctx, BN_MONT_CTX *in_mont)
++ const BIGNUM *m, BN_CTX *ctx, BN_MONT_CTX *in_mont)
+ {
+- struct crypt_kop kop;
+- int ret = 1;
++ struct crypt_kop kop;
++ int ret = 1;
+
+- /*
+- * Currently, we know we can do mod exp iff we can do any asymmetric
+- * operations at all.
+- */
+- if (cryptodev_asymfeat == 0) {
+- ret = BN_mod_exp(r, a, p, m, ctx);
+- return (ret);
+- }
+-
+- memset(&kop, 0, sizeof kop);
+- kop.crk_op = CRK_MOD_EXP;
+-
+- /* inputs: a^p % m */
+- if (bn2crparam(a, &kop.crk_param[0]))
+- goto err;
+- if (bn2crparam(p, &kop.crk_param[1]))
+- goto err;
+- if (bn2crparam(m, &kop.crk_param[2]))
+- goto err;
+- kop.crk_iparams = 3;
+-
+- if (cryptodev_asym(&kop, BN_num_bytes(m), r, 0, NULL)) {
+- const RSA_METHOD *meth = RSA_PKCS1_SSLeay();
+- printf("OCF asym process failed, Running in software\n");
+- ret = meth->bn_mod_exp(r, a, p, m, ctx, in_mont);
+-
+- } else if (ECANCELED == kop.crk_status) {
+- const RSA_METHOD *meth = RSA_PKCS1_SSLeay();
+- printf("OCF hardware operation cancelled. Running in Software\n");
+- ret = meth->bn_mod_exp(r, a, p, m, ctx, in_mont);
+- }
+- /* else cryptodev operation worked ok ==> ret = 1 */
+-
+- err:
+- zapparams(&kop);
+- return (ret);
++ /* Currently, we know we can do mod exp iff we can do any
++ * asymmetric operations at all.
++ */
++ if (cryptodev_asymfeat == 0) {
++ ret = BN_mod_exp(r, a, p, m, ctx);
++ return (ret);
++ }
++
++ memset(&kop, 0, sizeof kop);
++ kop.crk_op = CRK_MOD_EXP;
++
++ /* inputs: a^p % m */
++ if (bn2crparam(a, &kop.crk_param[0]))
++ goto err;
++ if (bn2crparam(p, &kop.crk_param[1]))
++ goto err;
++ if (bn2crparam(m, &kop.crk_param[2]))
++ goto err;
++ kop.crk_iparams = 3;
++
++ if (cryptodev_asym(&kop, BN_num_bytes(m), r, 0, NULL)) {
++ const RSA_METHOD *meth = RSA_PKCS1_SSLeay();
++ printf("OCF asym process failed, Running in software\n");
++ ret = meth->bn_mod_exp(r, a, p, m, ctx, in_mont);
++
++ } else if (ECANCELED == kop.crk_status) {
++ const RSA_METHOD *meth = RSA_PKCS1_SSLeay();
++ printf("OCF hardware operation cancelled. Running in Software\n");
++ ret = meth->bn_mod_exp(r, a, p, m, ctx, in_mont);
++ }
++ /* else cryptodev operation worked ok ==> ret = 1*/
++
++err:
++ zapparams(&kop);
++ return (ret);
+ }
+
+ static int
+-cryptodev_rsa_nocrt_mod_exp(BIGNUM *r0, const BIGNUM *I, RSA *rsa,
+- BN_CTX *ctx)
+-{
+- int r;
+- ctx = BN_CTX_new();
+- r = cryptodev_bn_mod_exp(r0, I, rsa->d, rsa->n, ctx, NULL);
+- BN_CTX_free(ctx);
+- return (r);
++cryptodev_rsa_nocrt_mod_exp(BIGNUM *r0, const BIGNUM *I, RSA *rsa, BN_CTX *ctx)
++{
++ int r;
++ ctx = BN_CTX_new();
++ r = cryptodev_bn_mod_exp(r0, I, rsa->d, rsa->n, ctx, NULL);
++ BN_CTX_free(ctx);
++ return (r);
+ }
+
+ static int
+ cryptodev_rsa_mod_exp(BIGNUM *r0, const BIGNUM *I, RSA *rsa, BN_CTX *ctx)
+ {
+- struct crypt_kop kop;
+- int ret = 1;
++ struct crypt_kop kop;
++ int ret = 1;
+
+- if (!rsa->p || !rsa->q || !rsa->dmp1 || !rsa->dmq1 || !rsa->iqmp) {
+- /* XXX 0 means failure?? */
+- return (0);
+- }
+-
+- memset(&kop, 0, sizeof kop);
+- kop.crk_op = CRK_MOD_EXP_CRT;
+- /* inputs: rsa->p rsa->q I rsa->dmp1 rsa->dmq1 rsa->iqmp */
+- if (bn2crparam(rsa->p, &kop.crk_param[0]))
+- goto err;
+- if (bn2crparam(rsa->q, &kop.crk_param[1]))
+- goto err;
+- if (bn2crparam(I, &kop.crk_param[2]))
+- goto err;
+- if (bn2crparam(rsa->dmp1, &kop.crk_param[3]))
+- goto err;
+- if (bn2crparam(rsa->dmq1, &kop.crk_param[4]))
+- goto err;
+- if (bn2crparam(rsa->iqmp, &kop.crk_param[5]))
+- goto err;
+- kop.crk_iparams = 6;
+-
+- if (cryptodev_asym(&kop, BN_num_bytes(rsa->n), r0, 0, NULL)) {
+- const RSA_METHOD *meth = RSA_PKCS1_SSLeay();
+- printf("OCF asym process failed, running in Software\n");
+- ret = (*meth->rsa_mod_exp) (r0, I, rsa, ctx);
+-
+- } else if (ECANCELED == kop.crk_status) {
+- const RSA_METHOD *meth = RSA_PKCS1_SSLeay();
+- printf("OCF hardware operation cancelled. Running in Software\n");
+- ret = (*meth->rsa_mod_exp) (r0, I, rsa, ctx);
+- }
+- /* else cryptodev operation worked ok ==> ret = 1 */
+-
+- err:
+- zapparams(&kop);
+- return (ret);
++ if (!rsa->p || !rsa->q || !rsa->dmp1 || !rsa->dmq1 || !rsa->iqmp) {
++ /* XXX 0 means failure?? */
++ return (0);
++ }
++
++ memset(&kop, 0, sizeof kop);
++ kop.crk_op = CRK_MOD_EXP_CRT;
++ /* inputs: rsa->p rsa->q I rsa->dmp1 rsa->dmq1 rsa->iqmp */
++ if (bn2crparam(rsa->p, &kop.crk_param[0]))
++ goto err;
++ if (bn2crparam(rsa->q, &kop.crk_param[1]))
++ goto err;
++ if (bn2crparam(I, &kop.crk_param[2]))
++ goto err;
++ if (bn2crparam(rsa->dmp1, &kop.crk_param[3]))
++ goto err;
++ if (bn2crparam(rsa->dmq1, &kop.crk_param[4]))
++ goto err;
++ if (bn2crparam(rsa->iqmp, &kop.crk_param[5]))
++ goto err;
++ kop.crk_iparams = 6;
++
++ if (cryptodev_asym(&kop, BN_num_bytes(rsa->n), r0, 0, NULL)) {
++ const RSA_METHOD *meth = RSA_PKCS1_SSLeay();
++ printf("OCF asym process failed, running in Software\n");
++ ret = (*meth->rsa_mod_exp)(r0, I, rsa, ctx);
++
++ } else if (ECANCELED == kop.crk_status) {
++ const RSA_METHOD *meth = RSA_PKCS1_SSLeay();
++ printf("OCF hardware operation cancelled. Running in Software\n");
++ ret = (*meth->rsa_mod_exp)(r0, I, rsa, ctx);
++ }
++ /* else cryptodev operation worked ok ==> ret = 1*/
++
++err:
++ zapparams(&kop);
++ return (ret);
+ }
+
+ static RSA_METHOD cryptodev_rsa = {
+- "cryptodev RSA method",
+- NULL, /* rsa_pub_enc */
+- NULL, /* rsa_pub_dec */
+- NULL, /* rsa_priv_enc */
+- NULL, /* rsa_priv_dec */
+- NULL,
+- NULL,
+- NULL, /* init */
+- NULL, /* finish */
+- 0, /* flags */
+- NULL, /* app_data */
+- NULL, /* rsa_sign */
+- NULL /* rsa_verify */
++ "cryptodev RSA method",
++ NULL, /* rsa_pub_enc */
++ NULL, /* rsa_pub_dec */
++ NULL, /* rsa_priv_enc */
++ NULL, /* rsa_priv_dec */
++ NULL,
++ NULL,
++ NULL, /* init */
++ NULL, /* finish */
++ 0, /* flags */
++ NULL, /* app_data */
++ NULL, /* rsa_sign */
++ NULL /* rsa_verify */
+ };
+
+ static int
+ cryptodev_dsa_bn_mod_exp(DSA *dsa, BIGNUM *r, BIGNUM *a, const BIGNUM *p,
+- const BIGNUM *m, BN_CTX *ctx, BN_MONT_CTX *m_ctx)
++ const BIGNUM *m, BN_CTX *ctx, BN_MONT_CTX *m_ctx)
+ {
+- return (cryptodev_bn_mod_exp(r, a, p, m, ctx, m_ctx));
++ return (cryptodev_bn_mod_exp(r, a, p, m, ctx, m_ctx));
+ }
+
+ static int
+ cryptodev_dsa_dsa_mod_exp(DSA *dsa, BIGNUM *t1, BIGNUM *g,
+- BIGNUM *u1, BIGNUM *pub_key, BIGNUM *u2, BIGNUM *p,
+- BN_CTX *ctx, BN_MONT_CTX *mont)
++ BIGNUM *u1, BIGNUM *pub_key, BIGNUM *u2, BIGNUM *p,
++ BN_CTX *ctx, BN_MONT_CTX *mont)
+ {
+- BIGNUM t2;
+- int ret = 0;
++ BIGNUM t2;
++ int ret = 0;
+
+- BN_init(&t2);
++ BN_init(&t2);
+
+- /* v = ( g^u1 * y^u2 mod p ) mod q */
+- /* let t1 = g ^ u1 mod p */
+- ret = 0;
+-
+- if (!dsa->meth->bn_mod_exp(dsa, t1, dsa->g, u1, dsa->p, ctx, mont))
+- goto err;
+-
+- /* let t2 = y ^ u2 mod p */
+- if (!dsa->meth->bn_mod_exp(dsa, &t2, dsa->pub_key, u2, dsa->p, ctx, mont))
+- goto err;
+- /* let u1 = t1 * t2 mod p */
+- if (!BN_mod_mul(u1, t1, &t2, dsa->p, ctx))
+- goto err;
+-
+- BN_copy(t1, u1);
+-
+- ret = 1;
+- err:
+- BN_free(&t2);
+- return (ret);
+-}
+-
+-static DSA_SIG *cryptodev_dsa_do_sign(const unsigned char *dgst, int dlen,
+- DSA *dsa)
+-{
+- struct crypt_kop kop;
+- BIGNUM *r = NULL, *s = NULL;
+- DSA_SIG *dsaret = NULL;
+-
+- if ((r = BN_new()) == NULL)
+- goto err;
+- if ((s = BN_new()) == NULL) {
+- BN_free(r);
+- goto err;
+- }
+-
+- memset(&kop, 0, sizeof kop);
+- kop.crk_op = CRK_DSA_SIGN;
+-
+- /* inputs: dgst dsa->p dsa->q dsa->g dsa->priv_key */
+- kop.crk_param[0].crp_p = (caddr_t) dgst;
+- kop.crk_param[0].crp_nbits = dlen * 8;
+- if (bn2crparam(dsa->p, &kop.crk_param[1]))
+- goto err;
+- if (bn2crparam(dsa->q, &kop.crk_param[2]))
+- goto err;
+- if (bn2crparam(dsa->g, &kop.crk_param[3]))
+- goto err;
+- if (bn2crparam(dsa->priv_key, &kop.crk_param[4]))
+- goto err;
+- kop.crk_iparams = 5;
+-
+- if (cryptodev_asym(&kop, BN_num_bytes(dsa->q), r,
+- BN_num_bytes(dsa->q), s) == 0) {
+- dsaret = DSA_SIG_new();
+- if (dsaret == NULL)
+- goto err;
+- dsaret->r = r;
+- dsaret->s = s;
+- r = s = NULL;
+- } else {
+- const DSA_METHOD *meth = DSA_OpenSSL();
+- dsaret = (meth->dsa_do_sign) (dgst, dlen, dsa);
+- }
+- err:
+- BN_free(r);
+- BN_free(s);
+- kop.crk_param[0].crp_p = NULL;
+- zapparams(&kop);
+- return (dsaret);
++ /* v = ( g^u1 * y^u2 mod p ) mod q */
++ /* let t1 = g ^ u1 mod p */
++ ret = 0;
++
++ if (!dsa->meth->bn_mod_exp(dsa,t1,dsa->g,u1,dsa->p,ctx,mont))
++ goto err;
++
++ /* let t2 = y ^ u2 mod p */
++ if (!dsa->meth->bn_mod_exp(dsa,&t2,dsa->pub_key,u2,dsa->p,ctx,mont))
++ goto err;
++ /* let u1 = t1 * t2 mod p */
++ if (!BN_mod_mul(u1,t1,&t2,dsa->p,ctx))
++ goto err;
++
++ BN_copy(t1,u1);
++
++ ret = 1;
++err:
++ BN_free(&t2);
++ return(ret);
++}
++
++static DSA_SIG *
++cryptodev_dsa_do_sign(const unsigned char *dgst, int dlen, DSA *dsa)
++{
++ struct crypt_kop kop;
++ BIGNUM *r = NULL, *s = NULL;
++ DSA_SIG *dsaret = NULL;
++
++ if ((r = BN_new()) == NULL)
++ goto err;
++ if ((s = BN_new()) == NULL) {
++ BN_free(r);
++ goto err;
++ }
++
++ memset(&kop, 0, sizeof kop);
++ kop.crk_op = CRK_DSA_SIGN;
++
++ /* inputs: dgst dsa->p dsa->q dsa->g dsa->priv_key */
++ kop.crk_param[0].crp_p = (void*)dgst;
++ kop.crk_param[0].crp_nbits = dlen * 8;
++ if (bn2crparam(dsa->p, &kop.crk_param[1]))
++ goto err;
++ if (bn2crparam(dsa->q, &kop.crk_param[2]))
++ goto err;
++ if (bn2crparam(dsa->g, &kop.crk_param[3]))
++ goto err;
++ if (bn2crparam(dsa->priv_key, &kop.crk_param[4]))
++ goto err;
++ kop.crk_iparams = 5;
++
++ if (cryptodev_asym(&kop, BN_num_bytes(dsa->q), r,
++ BN_num_bytes(dsa->q), s) == 0) {
++ dsaret = DSA_SIG_new();
++ dsaret->r = r;
++ dsaret->s = s;
++ } else {
++ const DSA_METHOD *meth = DSA_OpenSSL();
++ BN_free(r);
++ BN_free(s);
++ dsaret = (meth->dsa_do_sign)(dgst, dlen, dsa);
++ }
++err:
++ kop.crk_param[0].crp_p = NULL;
++ zapparams(&kop);
++ return (dsaret);
+ }
+
+ static int
+ cryptodev_dsa_verify(const unsigned char *dgst, int dlen,
+- DSA_SIG *sig, DSA *dsa)
++ DSA_SIG *sig, DSA *dsa)
+ {
+- struct crypt_kop kop;
+- int dsaret = 1;
++ struct crypt_kop kop;
++ int dsaret = 1;
+
+- memset(&kop, 0, sizeof kop);
+- kop.crk_op = CRK_DSA_VERIFY;
++ memset(&kop, 0, sizeof kop);
++ kop.crk_op = CRK_DSA_VERIFY;
+
+- /* inputs: dgst dsa->p dsa->q dsa->g dsa->pub_key sig->r sig->s */
+- kop.crk_param[0].crp_p = (caddr_t) dgst;
+- kop.crk_param[0].crp_nbits = dlen * 8;
+- if (bn2crparam(dsa->p, &kop.crk_param[1]))
+- goto err;
+- if (bn2crparam(dsa->q, &kop.crk_param[2]))
+- goto err;
+- if (bn2crparam(dsa->g, &kop.crk_param[3]))
+- goto err;
+- if (bn2crparam(dsa->pub_key, &kop.crk_param[4]))
+- goto err;
+- if (bn2crparam(sig->r, &kop.crk_param[5]))
+- goto err;
+- if (bn2crparam(sig->s, &kop.crk_param[6]))
+- goto err;
+- kop.crk_iparams = 7;
+-
+- if (cryptodev_asym(&kop, 0, NULL, 0, NULL) == 0) {
+- /*
+- * OCF success value is 0, if not zero, change dsaret to fail
+- */
+- if (0 != kop.crk_status)
+- dsaret = 0;
+- } else {
+- const DSA_METHOD *meth = DSA_OpenSSL();
+-
+- dsaret = (meth->dsa_do_verify) (dgst, dlen, sig, dsa);
+- }
+- err:
+- kop.crk_param[0].crp_p = NULL;
+- zapparams(&kop);
+- return (dsaret);
++ /* inputs: dgst dsa->p dsa->q dsa->g dsa->pub_key sig->r sig->s */
++ kop.crk_param[0].crp_p = (void*)dgst;
++ kop.crk_param[0].crp_nbits = dlen * 8;
++ if (bn2crparam(dsa->p, &kop.crk_param[1]))
++ goto err;
++ if (bn2crparam(dsa->q, &kop.crk_param[2]))
++ goto err;
++ if (bn2crparam(dsa->g, &kop.crk_param[3]))
++ goto err;
++ if (bn2crparam(dsa->pub_key, &kop.crk_param[4]))
++ goto err;
++ if (bn2crparam(sig->r, &kop.crk_param[5]))
++ goto err;
++ if (bn2crparam(sig->s, &kop.crk_param[6]))
++ goto err;
++ kop.crk_iparams = 7;
++
++ if (cryptodev_asym(&kop, 0, NULL, 0, NULL) == 0) {
++/*OCF success value is 0, if not zero, change dsaret to fail*/
++ if(0 != kop.crk_status) dsaret = 0;
++ } else {
++ const DSA_METHOD *meth = DSA_OpenSSL();
++
++ dsaret = (meth->dsa_do_verify)(dgst, dlen, sig, dsa);
++ }
++err:
++ kop.crk_param[0].crp_p = NULL;
++ zapparams(&kop);
++ return (dsaret);
+ }
+
+ static DSA_METHOD cryptodev_dsa = {
+- "cryptodev DSA method",
+- NULL,
+- NULL, /* dsa_sign_setup */
+- NULL,
+- NULL, /* dsa_mod_exp */
+- NULL,
+- NULL, /* init */
+- NULL, /* finish */
+- 0, /* flags */
+- NULL /* app_data */
++ "cryptodev DSA method",
++ NULL,
++ NULL, /* dsa_sign_setup */
++ NULL,
++ NULL, /* dsa_mod_exp */
++ NULL,
++ NULL, /* init */
++ NULL, /* finish */
++ 0, /* flags */
++ NULL /* app_data */
+ };
+
+ static int
+ cryptodev_mod_exp_dh(const DH *dh, BIGNUM *r, const BIGNUM *a,
+- const BIGNUM *p, const BIGNUM *m, BN_CTX *ctx,
+- BN_MONT_CTX *m_ctx)
++ const BIGNUM *p, const BIGNUM *m, BN_CTX *ctx,
++ BN_MONT_CTX *m_ctx)
+ {
+- return (cryptodev_bn_mod_exp(r, a, p, m, ctx, m_ctx));
++ return (cryptodev_bn_mod_exp(r, a, p, m, ctx, m_ctx));
+ }
+
+ static int
+ cryptodev_dh_compute_key(unsigned char *key, const BIGNUM *pub_key, DH *dh)
+ {
+- struct crypt_kop kop;
+- int dhret = 1;
+- int fd, keylen;
+-
+- if ((fd = get_asym_dev_crypto()) < 0) {
+- const DH_METHOD *meth = DH_OpenSSL();
+-
+- return ((meth->compute_key) (key, pub_key, dh));
+- }
+-
+- keylen = BN_num_bits(dh->p);
+-
+- memset(&kop, 0, sizeof kop);
+- kop.crk_op = CRK_DH_COMPUTE_KEY;
+-
+- /* inputs: dh->priv_key pub_key dh->p key */
+- if (bn2crparam(dh->priv_key, &kop.crk_param[0]))
+- goto err;
+- if (bn2crparam(pub_key, &kop.crk_param[1]))
+- goto err;
+- if (bn2crparam(dh->p, &kop.crk_param[2]))
+- goto err;
+- kop.crk_iparams = 3;
+-
+- kop.crk_param[3].crp_p = (caddr_t) key;
+- kop.crk_param[3].crp_nbits = keylen * 8;
+- kop.crk_oparams = 1;
+-
+- if (ioctl(fd, CIOCKEY, &kop) == -1) {
+- const DH_METHOD *meth = DH_OpenSSL();
+-
+- dhret = (meth->compute_key) (key, pub_key, dh);
+- }
+- err:
+- kop.crk_param[3].crp_p = NULL;
+- zapparams(&kop);
+- return (dhret);
++ struct crypt_kop kop;
++ int dhret = 1;
++ int fd, keylen;
++
++ if ((fd = get_asym_dev_crypto()) < 0) {
++ const DH_METHOD *meth = DH_OpenSSL();
++
++ return ((meth->compute_key)(key, pub_key, dh));
++ }
++
++ keylen = BN_num_bits(dh->p);
++
++ memset(&kop, 0, sizeof kop);
++ kop.crk_op = CRK_DH_COMPUTE_KEY;
++
++ /* inputs: dh->priv_key pub_key dh->p key */
++ if (bn2crparam(dh->priv_key, &kop.crk_param[0]))
++ goto err;
++ if (bn2crparam(pub_key, &kop.crk_param[1]))
++ goto err;
++ if (bn2crparam(dh->p, &kop.crk_param[2]))
++ goto err;
++ kop.crk_iparams = 3;
++
++ kop.crk_param[3].crp_p = (void*) key;
++ kop.crk_param[3].crp_nbits = keylen;
++ kop.crk_oparams = 1;
++ dhret = keylen/8;
++
++ if (ioctl(fd, CIOCKEY, &kop) == -1) {
++ const DH_METHOD *meth = DH_OpenSSL();
++
++ dhret = (meth->compute_key)(key, pub_key, dh);
++ }
++err:
++ kop.crk_param[3].crp_p = NULL;
++ zapparams(&kop);
++ return (dhret);
+ }
+
+ static DH_METHOD cryptodev_dh = {
+- "cryptodev DH method",
+- NULL, /* cryptodev_dh_generate_key */
+- NULL,
+- NULL,
+- NULL,
+- NULL,
+- 0, /* flags */
+- NULL /* app_data */
++ "cryptodev DH method",
++ NULL, /* cryptodev_dh_generate_key */
++ NULL,
++ NULL,
++ NULL,
++ NULL,
++ 0, /* flags */
++ NULL /* app_data */
+ };
+
+ /*
+@@ -1446,104 +1505,109 @@
+ * but I expect we'll want some options soon.
+ */
+ static int
+-cryptodev_ctrl(ENGINE *e, int cmd, long i, void *p, void (*f) (void))
++cryptodev_ctrl(ENGINE *e, int cmd, long i, void *p, void (*f)(void))
+ {
+-# ifdef HAVE_SYSLOG_R
+- struct syslog_data sd = SYSLOG_DATA_INIT;
+-# endif
++#ifdef HAVE_SYSLOG_R
++ struct syslog_data sd = SYSLOG_DATA_INIT;
++#endif
+
+- switch (cmd) {
+- default:
+-# ifdef HAVE_SYSLOG_R
+- syslog_r(LOG_ERR, &sd, "cryptodev_ctrl: unknown command %d", cmd);
+-# else
+- syslog(LOG_ERR, "cryptodev_ctrl: unknown command %d", cmd);
+-# endif
+- break;
+- }
+- return (1);
+-}
+-
+-void ENGINE_load_cryptodev(void)
+-{
+- ENGINE *engine = ENGINE_new();
+- int fd;
+-
+- if (engine == NULL)
+- return;
+- if ((fd = get_dev_crypto()) < 0) {
+- ENGINE_free(engine);
+- return;
+- }
+-
+- /*
+- * find out what asymmetric crypto algorithms we support
+- */
+- if (ioctl(fd, CIOCASYMFEAT, &cryptodev_asymfeat) == -1) {
+- put_dev_crypto(fd);
+- ENGINE_free(engine);
+- return;
+- }
+- put_dev_crypto(fd);
+-
+- if (!ENGINE_set_id(engine, "cryptodev") ||
+- !ENGINE_set_name(engine, "BSD cryptodev engine") ||
+- !ENGINE_set_ciphers(engine, cryptodev_engine_ciphers) ||
+- !ENGINE_set_digests(engine, cryptodev_engine_digests) ||
+- !ENGINE_set_ctrl_function(engine, cryptodev_ctrl) ||
+- !ENGINE_set_cmd_defns(engine, cryptodev_defns)) {
+- ENGINE_free(engine);
+- return;
+- }
+-
+- if (ENGINE_set_RSA(engine, &cryptodev_rsa)) {
+- const RSA_METHOD *rsa_meth = RSA_PKCS1_SSLeay();
+-
+- cryptodev_rsa.bn_mod_exp = rsa_meth->bn_mod_exp;
+- cryptodev_rsa.rsa_mod_exp = rsa_meth->rsa_mod_exp;
+- cryptodev_rsa.rsa_pub_enc = rsa_meth->rsa_pub_enc;
+- cryptodev_rsa.rsa_pub_dec = rsa_meth->rsa_pub_dec;
+- cryptodev_rsa.rsa_priv_enc = rsa_meth->rsa_priv_enc;
+- cryptodev_rsa.rsa_priv_dec = rsa_meth->rsa_priv_dec;
+- if (cryptodev_asymfeat & CRF_MOD_EXP) {
+- cryptodev_rsa.bn_mod_exp = cryptodev_bn_mod_exp;
+- if (cryptodev_asymfeat & CRF_MOD_EXP_CRT)
+- cryptodev_rsa.rsa_mod_exp = cryptodev_rsa_mod_exp;
+- else
+- cryptodev_rsa.rsa_mod_exp = cryptodev_rsa_nocrt_mod_exp;
+- }
+- }
+-
+- if (ENGINE_set_DSA(engine, &cryptodev_dsa)) {
+- const DSA_METHOD *meth = DSA_OpenSSL();
+-
+- memcpy(&cryptodev_dsa, meth, sizeof(DSA_METHOD));
+- if (cryptodev_asymfeat & CRF_DSA_SIGN)
+- cryptodev_dsa.dsa_do_sign = cryptodev_dsa_do_sign;
+- if (cryptodev_asymfeat & CRF_MOD_EXP) {
+- cryptodev_dsa.bn_mod_exp = cryptodev_dsa_bn_mod_exp;
+- cryptodev_dsa.dsa_mod_exp = cryptodev_dsa_dsa_mod_exp;
+- }
+- if (cryptodev_asymfeat & CRF_DSA_VERIFY)
+- cryptodev_dsa.dsa_do_verify = cryptodev_dsa_verify;
+- }
+-
+- if (ENGINE_set_DH(engine, &cryptodev_dh)) {
+- const DH_METHOD *dh_meth = DH_OpenSSL();
+-
+- cryptodev_dh.generate_key = dh_meth->generate_key;
+- cryptodev_dh.compute_key = dh_meth->compute_key;
+- cryptodev_dh.bn_mod_exp = dh_meth->bn_mod_exp;
+- if (cryptodev_asymfeat & CRF_MOD_EXP) {
+- cryptodev_dh.bn_mod_exp = cryptodev_mod_exp_dh;
+- if (cryptodev_asymfeat & CRF_DH_COMPUTE_KEY)
+- cryptodev_dh.compute_key = cryptodev_dh_compute_key;
+- }
+- }
+-
+- ENGINE_add(engine);
+- ENGINE_free(engine);
+- ERR_clear_error();
++ switch (cmd) {
++ default:
++#ifdef HAVE_SYSLOG_R
++ syslog_r(LOG_ERR, &sd,
++ "cryptodev_ctrl: unknown command %d", cmd);
++#else
++ syslog(LOG_ERR, "cryptodev_ctrl: unknown command %d", cmd);
++#endif
++ break;
++ }
++ return (1);
++}
++
++void
++ENGINE_load_cryptodev(void)
++{
++ ENGINE *engine = ENGINE_new();
++ int fd;
++
++ if (engine == NULL)
++ return;
++ if ((fd = get_dev_crypto()) < 0) {
++ ENGINE_free(engine);
++ return;
++ }
++
++ /*
++ * find out what asymmetric crypto algorithms we support
++ */
++ if (ioctl(fd, CIOCASYMFEAT, &cryptodev_asymfeat) == -1) {
++ put_dev_crypto(fd);
++ ENGINE_free(engine);
++ return;
++ }
++ put_dev_crypto(fd);
++
++ if (!ENGINE_set_id(engine, "cryptodev") ||
++ !ENGINE_set_name(engine, "cryptodev engine") ||
++ !ENGINE_set_ciphers(engine, cryptodev_engine_ciphers) ||
++ !ENGINE_set_digests(engine, cryptodev_engine_digests) ||
++ !ENGINE_set_ctrl_function(engine, cryptodev_ctrl) ||
++ !ENGINE_set_cmd_defns(engine, cryptodev_defns)) {
++ ENGINE_free(engine);
++ return;
++ }
++
++ if (ENGINE_set_RSA(engine, &cryptodev_rsa)) {
++ const RSA_METHOD *rsa_meth = RSA_PKCS1_SSLeay();
++
++ cryptodev_rsa.bn_mod_exp = rsa_meth->bn_mod_exp;
++ cryptodev_rsa.rsa_mod_exp = rsa_meth->rsa_mod_exp;
++ cryptodev_rsa.rsa_pub_enc = rsa_meth->rsa_pub_enc;
++ cryptodev_rsa.rsa_pub_dec = rsa_meth->rsa_pub_dec;
++ cryptodev_rsa.rsa_priv_enc = rsa_meth->rsa_priv_enc;
++ cryptodev_rsa.rsa_priv_dec = rsa_meth->rsa_priv_dec;
++ if (cryptodev_asymfeat & CRF_MOD_EXP) {
++ cryptodev_rsa.bn_mod_exp = cryptodev_bn_mod_exp;
++ if (cryptodev_asymfeat & CRF_MOD_EXP_CRT)
++ cryptodev_rsa.rsa_mod_exp =
++ cryptodev_rsa_mod_exp;
++ else
++ cryptodev_rsa.rsa_mod_exp =
++ cryptodev_rsa_nocrt_mod_exp;
++ }
++ }
++
++ if (ENGINE_set_DSA(engine, &cryptodev_dsa)) {
++ const DSA_METHOD *meth = DSA_OpenSSL();
++
++ memcpy(&cryptodev_dsa, meth, sizeof(DSA_METHOD));
++ if (cryptodev_asymfeat & CRF_DSA_SIGN)
++ cryptodev_dsa.dsa_do_sign = cryptodev_dsa_do_sign;
++ if (cryptodev_asymfeat & CRF_MOD_EXP) {
++ cryptodev_dsa.bn_mod_exp = cryptodev_dsa_bn_mod_exp;
++ cryptodev_dsa.dsa_mod_exp = cryptodev_dsa_dsa_mod_exp;
++ }
++ if (cryptodev_asymfeat & CRF_DSA_VERIFY)
++ cryptodev_dsa.dsa_do_verify = cryptodev_dsa_verify;
++ }
++
++ if (ENGINE_set_DH(engine, &cryptodev_dh)){
++ const DH_METHOD *dh_meth = DH_OpenSSL();
++
++ cryptodev_dh.generate_key = dh_meth->generate_key;
++ cryptodev_dh.compute_key = dh_meth->compute_key;
++ cryptodev_dh.bn_mod_exp = dh_meth->bn_mod_exp;
++ if (cryptodev_asymfeat & CRF_MOD_EXP) {
++ cryptodev_dh.bn_mod_exp = cryptodev_mod_exp_dh;
++ if (cryptodev_asymfeat & CRF_DH_COMPUTE_KEY)
++ cryptodev_dh.compute_key =
++ cryptodev_dh_compute_key;
++ }
++ }
++
++ ENGINE_add(engine);
++ ENGINE_free(engine);
++ ERR_clear_error();
+ }
+
+-#endif /* HAVE_CRYPTODEV */
++#endif /* HAVE_CRYPTODEV */
diff --git a/docs/img/cesa/cesa_block_diagram.png b/docs/img/cesa/cesa_block_diagram.png
new file mode 100644
index 0000000..f71a11b
Binary files /dev/null and b/docs/img/cesa/cesa_block_diagram.png differ
diff --git a/docs/img/cesa/crypto_api_interfaces.png b/docs/img/cesa/crypto_api_interfaces.png
new file mode 100644
index 0000000..6391848
Binary files /dev/null and b/docs/img/cesa/crypto_api_interfaces.png differ