the transparent emulation layer)
-ALGORITHM/ADAPTER IMPLEMENTATION
---------------------------------
+ADAPTER IMPLEMENTATION
+----------------------
-When you write a new algorithm driver, you will have to implement a
-function callback `functionality', that gets an i2c_adapter structure
-pointer as its only parameter:
+When you write a new adapter driver, you will have to implement a
+function callback `functionality'. Typical implementations are given
+below.
- struct i2c_algorithm {
- /* Many other things of course; check <linux/i2c.h>! */
- u32 (*functionality) (struct i2c_adapter *);
+A typical SMBus-only adapter would list all the SMBus transactions it
+supports. This example comes from the i2c-piix4 driver:
+
+ static u32 piix4_func(struct i2c_adapter *adapter)
+ {
+ return I2C_FUNC_SMBUS_QUICK | I2C_FUNC_SMBUS_BYTE |
+ I2C_FUNC_SMBUS_BYTE_DATA | I2C_FUNC_SMBUS_WORD_DATA |
+ I2C_FUNC_SMBUS_BLOCK_DATA;
}
-A typically implementation is given below, from i2c-algo-bit.c:
+A typical full-I2C adapter would use the following (from the i2c-pxa
+driver):
- static u32 bit_func(struct i2c_adapter *adap)
+ static u32 i2c_pxa_functionality(struct i2c_adapter *adap)
{
- return I2C_FUNC_SMBUS_EMUL | I2C_FUNC_10BIT_ADDR |
- I2C_FUNC_PROTOCOL_MANGLING;
+ return I2C_FUNC_I2C | I2C_FUNC_SMBUS_EMUL;
}
+I2C_FUNC_SMBUS_EMUL includes all the SMBus transactions (with the
+addition of I2C block transactions) which i2c-core can emulate using
+I2C_FUNC_I2C without any help from the adapter driver. The idea is
+to let the client drivers check for the support of SMBus functions
+without having to care whether the said functions are implemented in
+hardware by the adapter, or emulated in software by i2c-core on top
+of an I2C adapter.
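As an illustration (a sketch written for this document, not taken from any
particular driver; client, reg and value are placeholders), the same client
code then works unchanged on both kinds of adapters shown above:

	/* Succeeds on i2c-piix4 (hardware SMBus) as well as on i2c-pxa
	   (emulated by i2c-core on top of I2C_FUNC_I2C) */
	if (i2c_check_functionality(adapter, I2C_FUNC_SMBUS_BYTE_DATA))
		value = i2c_smbus_read_byte_data(client, reg);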
CLIENT CHECKING
Before a client tries to attach to an adapter, or even do tests to check
whether one of the devices it supports is present on an adapter, it should
-check whether the needed functionality is present. There are two functions
-defined which should be used instead of calling the functionality hook
-in the algorithm structure directly:
-
- /* Return the functionality mask */
- extern u32 i2c_get_functionality (struct i2c_adapter *adap);
-
- /* Return 1 if adapter supports everything we need, 0 if not. */
- extern int i2c_check_functionality (struct i2c_adapter *adap, u32 func);
+check whether the needed functionality is present. The typical way to do
+this is (from the lm75 driver):
-This is a typical way to use these functions (from the writing-clients
-document):
- int foo_detect_client(struct i2c_adapter *adapter, int address,
- unsigned short flags, int kind)
+ static int lm75_detect(...)
{
- /* Define needed variables */
-
- /* As the very first action, we check whether the adapter has the
- needed functionality: we need the SMBus read_word_data,
- write_word_data and write_byte functions in this example. */
- if (!i2c_check_functionality(adapter,I2C_FUNC_SMBUS_WORD_DATA |
- I2C_FUNC_SMBUS_WRITE_BYTE))
- goto ERROR0;
-
- /* Now we can do the real detection */
-
- ERROR0:
- /* Return an error */
+ (...)
+ if (!i2c_check_functionality(adapter, I2C_FUNC_SMBUS_BYTE_DATA |
+ I2C_FUNC_SMBUS_WORD_DATA))
+ goto exit;
+ (...)
}
+Here, the lm75 driver checks if the adapter can do both SMBus byte data
+and SMBus word data transactions. If not, then the driver won't work on
+this adapter and there's no point in going on. If the check above is
+successful, then the driver knows that it can call the following
+functions: i2c_smbus_read_byte_data(), i2c_smbus_write_byte_data(),
+i2c_smbus_read_word_data() and i2c_smbus_write_word_data(). As a rule of
+thumb, the functionality constants you test for with
+i2c_check_functionality() should match exactly the i2c_smbus_* functions
+which your driver is calling.
+
+Note that the check above doesn't tell whether the functionalities are
+implemented in hardware by the underlying adapter or emulated in
+software by i2c-core. Client drivers don't have to care about this, as
+i2c-core will transparently implement SMBus transactions on top of I2C
+adapters.
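To make the rule of thumb concrete, here is an illustrative sketch (the
FOO_* register names and the foo_update() helper are invented for this
document) of a client whose functionality check is exactly
I2C_FUNC_SMBUS_BYTE_DATA | I2C_FUNC_SMBUS_WORD_DATA, matching the calls
it makes:

	static int foo_update(struct i2c_client *client)
	{
		int config, temp;

		config = i2c_smbus_read_byte_data(client, FOO_REG_CONFIG);
		if (config < 0)
			return config;

		temp = i2c_smbus_read_word_data(client, FOO_REG_TEMP);
		if (temp < 0)
			return temp;

		/* Covered by the same two functionality bits as the reads */
		return i2c_smbus_write_byte_data(client, FOO_REG_CONFIG,
						 config | 0x01);
	}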
CHECKING THROUGH /DEV
If you try to access an adapter from a userspace program, you will have
to use the /dev interface. You will still have to check whether the
functionality you need is supported, of course. This is done using
-the I2C_FUNCS ioctl. An example, adapted from the lm_sensors i2cdetect
-program, is below:
+the I2C_FUNCS ioctl. An example, adapted from the i2cdetect program, is
+below:
int file;
- if (file = open("/dev/i2c-0",O_RDWR) < 0) {
+	if ((file = open("/dev/i2c-0", O_RDWR)) < 0) {
/* Some kind of error handling */
exit(1);
}
- if (ioctl(file,I2C_FUNCS,&funcs) < 0) {
+ if (ioctl(file, I2C_FUNCS, &funcs) < 0) {
/* Some kind of error handling */
exit(1);
}
- if (! (funcs & I2C_FUNC_SMBUS_QUICK)) {
+ if (!(funcs & I2C_FUNC_SMBUS_QUICK)) {
/* Oops, the needed functionality (SMBus write_quick function) is
not available! */
exit(1);
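Once the functionality check has passed, the program can bind to a chip
address and issue the transaction. The following sketch (error handling
trimmed) assumes the lm_sensors-style <linux/i2c-dev.h> header, which
provides struct i2c_smbus_ioctl_data together with the I2C_SLAVE and
I2C_SMBUS ioctls; probing an address with a Quick write like this is
roughly what i2cdetect does:

	struct i2c_smbus_ioctl_data args;

	if (ioctl(file, I2C_SLAVE, addr) < 0)
		exit(1);	/* address busy (kernel driver attached) or invalid */

	args.read_write = I2C_SMBUS_WRITE;	/* Quick command with the bit set to 0 */
	args.command    = 0;
	args.size       = I2C_SMBUS_QUICK;
	args.data       = NULL;
	if (ioctl(file, I2C_SMBUS, &args) < 0) {
		/* No device answered at this address */
	}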
SMBus Protocol Summary
======================
+
The following is a summary of the SMBus protocol. It applies to
all revisions of the protocol (1.0, 1.1, and 2.0).
Certain protocol features which are not supported by
Some adapters understand only the SMBus (System Management Bus) protocol,
which is a subset of the I2C protocol. Fortunately, many devices use
only the same subset, which makes it possible to put them on an SMBus.
+
If you write a driver for some I2C device, please try to use the SMBus
commands if at all possible (if the device uses only that subset of the
I2C protocol). This makes it possible to use the device driver on both
translated to I2C on I2C adapters, but plain I2C commands can not be
handled at all on most pure SMBus adapters).
-Below is a list of SMBus commands.
+Below is a list of SMBus protocol operations, and the functions executing
+them. Note that the names used in the SMBus protocol specifications usually
+don't match these function names. For some of the operations which pass a
+single data byte, the functions using SMBus protocol operation names execute
+a different protocol operation entirely.
+
Key to symbols
==============
[..]: Data sent by I2C device, as opposed to data sent by the host adapter.
-SMBus Write Quick
-=================
+SMBus Quick Command: i2c_smbus_write_quick()
+=============================================
This sends a single bit to the device, at the place of the Rd/Wr bit.
-There is no equivalent Read Quick command.
S Addr Rd/Wr [A] P
-SMBus Read Byte
-===============
+SMBus Receive Byte: i2c_smbus_read_byte()
+==========================================
This reads a single byte from a device, without specifying a device
register. Some devices are so simple that this interface is enough; for
S Addr Rd [A] [Data] NA P
-SMBus Write Byte
-================
+SMBus Send Byte: i2c_smbus_write_byte()
+========================================
-This is the reverse of Read Byte: it sends a single byte to a device.
-See Read Byte for more information.
+This operation is the reverse of Receive Byte: it sends a single byte
+to a device. See Receive Byte for more information.
S Addr Wr [A] Data [A] P
-SMBus Read Byte Data
-====================
+SMBus Read Byte: i2c_smbus_read_byte_data()
+============================================
This reads a single byte from a device, from a designated register.
The register is specified through the Comm byte.
S Addr Wr [A] Comm [A] S Addr Rd [A] [Data] NA P
-SMBus Read Word Data
-====================
+SMBus Read Word: i2c_smbus_read_word_data()
+============================================
-This command is very like Read Byte Data; again, data is read from a
+This operation is very like Read Byte; again, data is read from a
device, from a designated register that is specified through the Comm
byte. But this time, the data is a complete word (16 bits).
S Addr Wr [A] Comm [A] S Addr Rd [A] [DataLow] A [DataHigh] NA P
-SMBus Write Byte Data
-=====================
+SMBus Write Byte: i2c_smbus_write_byte_data()
+==============================================
This writes a single byte to a device, to a designated register. The
register is specified through the Comm byte. This is the opposite of
-the Read Byte Data command.
+the Read Byte operation.
S Addr Wr [A] Comm [A] Data [A] P
-SMBus Write Word Data
-=====================
+SMBus Write Word: i2c_smbus_write_word_data()
+==============================================
-This is the opposite operation of the Read Word Data command. 16 bits
+This is the opposite of the Read Word operation. 16 bits
of data is written to a device, to the designated register that is
specified through the Comm byte.
S Addr Rd [A] [DataLow] A [DataHigh] NA P
-SMBus Block Read
-================
+SMBus Block Read: i2c_smbus_read_block_data()
+==============================================
This command reads a block of up to 32 bytes from a device, from a
designated register that is specified through the Comm byte. The amount
S Addr Rd [A] [Count] A [Data] A [Data] A ... A [Data] NA P
-SMBus Block Write
-=================
+SMBus Block Write: i2c_smbus_write_block_data()
+================================================
The opposite of the Block Read command, this writes up to 32 bytes to
a device, to a designated register that is specified through the
S Addr Wr [A] Comm [A] Count [A] Data [A] Data [A] ... [A] Data [A] P
-SMBus Block Process Call
-========================
+SMBus Block Write - Block Read Process Call
+===========================================
-SMBus Block Process Call was introduced in Revision 2.0 of the specification.
+SMBus Block Write - Block Read Process Call was introduced in
+Revision 2.0 of the specification.
This command selects a device register (through the Comm byte), sends
1 to 31 bytes of data to it, and reads 1 to 31 bytes of data in return.
Packet Error Checking (PEC)
===========================
+
Packet Error Checking was introduced in Revision 1.1 of the specification.
-PEC adds a CRC-8 error-checking byte to all transfers.
+PEC adds a CRC-8 error-checking byte to transfers using it, immediately
+before the terminating STOP.
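For a kernel client, opting in to PEC typically looks like the sketch
below (check <linux/i2c.h> of your kernel for the exact names; the
snippet assumes the I2C_FUNC_SMBUS_PEC functionality flag and the
I2C_CLIENT_PEC client flag):

	if (i2c_check_functionality(adapter, I2C_FUNC_SMBUS_PEC))
		client->flags |= I2C_CLIENT_PEC;	/* append/verify CRC-8 on transfers */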
Address Resolution Protocol (ARP)
=================================
+
The Address Resolution Protocol was introduced in Revision 2.0 of
the specification. It is a higher-layer protocol which uses the
messages above.
I2C Block Transactions
======================
+
The following I2C block transactions are supported by the
SMBus layer and are described here for completeness.
+They are *NOT* defined by the SMBus specification.
+
I2C block transactions do not limit the number of bytes transferred
but the SMBus layer places a limit of 32 bytes.
-I2C Block Read
-==============
+I2C Block Read: i2c_smbus_read_i2c_block_data()
+================================================
This command reads a block of bytes from a device, from a
designated register that is specified through the Comm byte.
S Addr Rd [A] [Data] A [Data] A ... A [Data] NA P
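An illustrative call for the above (FOO_REG_FIFO is a made-up register;
on recent kernels the helper takes the requested length and returns the
number of bytes read or a negative errno):

	u8 buf[I2C_SMBUS_BLOCK_MAX];	/* 32 bytes, the SMBus layer's limit */
	int n;

	n = i2c_smbus_read_i2c_block_data(client, FOO_REG_FIFO,
					  sizeof(buf), buf);
	if (n < 0)
		return n;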
-I2C Block Write
-===============
+I2C Block Write: i2c_smbus_write_i2c_block_data()
+==================================================
The opposite of the Block Read command, this writes bytes to
a device, to a designated register that is specified through the
supported as they are indistinguishable from data.
S Addr Wr [A] Comm [A] Data [A] Data [A] ... [A] Data [A] P
-
-
Enable logging of debug information in case of ccw device timeouts.
-
-* cio_msg = yes | no
-
- Determines whether information on found devices and sensed device
- characteristics should be shown during startup or when new devices are
- found, i. e. messages of the types "Detected device 0.0.4711 on subchannel
- 0.0.0042" and "SenseID: Device 0.0.4711 reports: ...".
-
- Default is off.
-
-
* cio_ignore = {all} |
{<device> | <range of devices>} |
{!<device> | !<range of devices>}
VERSION = 2
PATCHLEVEL = 6
SUBLEVEL = 26
-EXTRAVERSION = -rc1
+EXTRAVERSION = -rc2
NAME = Funky Weasel is Jiggy wit it
# *DOCUMENTATION*
const int port = gpio >> 3;
const int port_mask = 1 << (gpio & 7);
- gpio_direction_output(gpio, gpio_get_value(gpio));
+ gpio_direction_input(gpio);
switch (type) {
case IRQT_RISING:
#if 0
#define handle_irq handle_level_irq
#else
-void handle_prio_irq(unsigned int irq, struct irq_desc *desc)
+static void handle_prio_irq(unsigned int irq, struct irq_desc *desc)
{
unsigned int cpu = smp_processor_id();
struct irqaction *action;
spin_lock(&desc->lock);
- if (unlikely(desc->status & IRQ_INPROGRESS))
- goto out_unlock;
+ BUG_ON(desc->status & IRQ_INPROGRESS);
desc->status &= ~(IRQ_REPLAY | IRQ_WAITING);
kstat_cpu(cpu).irqs[irq]++;
action = desc->action;
if (unlikely(!action || (desc->status & IRQ_DISABLED)))
- goto out_unlock;
+ goto out_mask;
desc->status |= IRQ_INPROGRESS;
spin_unlock(&desc->lock);
action_ret = handle_IRQ_event(irq, action);
+ /* XXX: There is no direct way to access noirqdebug, so check
+ * unconditionally for spurious irqs...
+ * Maybe this function should go to kernel/irq/chip.c? */
+ note_interrupt(irq, desc, action_ret);
+
spin_lock(&desc->lock);
desc->status &= ~IRQ_INPROGRESS;
- if (!(desc->status & IRQ_DISABLED) && desc->chip->ack)
- desc->chip->ack(irq);
-out_unlock:
+ if (desc->status & IRQ_DISABLED)
+out_mask:
+ desc->chip->mask(irq);
+
+ /* ack unconditionally to unmask lower prio irqs */
+ desc->chip->ack(irq);
+
spin_unlock(&desc->lock);
}
#define handle_irq handle_prio_irq
* Non-CPU Masters address decoding --
* Unlike the CPU, we setup the access from Orion's master interfaces to DDR
* banks only (the typical use case).
- * Setup access for each master to DDR is issued by common.c.
- *
- * Note: although orion_setbits() and orion_clrbits() are not atomic
- * no locking is necessary here since code in this file is only called
- * at boot time when there is no concurrency issues.
+ * Setup access for each master to DDR is issued by platform device setup.
*/
/*
#define TARGET_DEV_BUS 1
#define TARGET_PCI 3
#define TARGET_PCIE 4
-#define ATTR_DDR_CS(n) (((n) ==0) ? 0xe : \
- ((n) == 1) ? 0xd : \
- ((n) == 2) ? 0xb : \
- ((n) == 3) ? 0x7 : 0xf)
#define ATTR_PCIE_MEM 0x59
#define ATTR_PCIE_IO 0x51
#define ATTR_PCIE_WA 0x79
#define ATTR_DEV_CS1 0x1d
#define ATTR_DEV_CS2 0x1b
#define ATTR_DEV_BOOT 0xf
-#define WIN_EN 1
/*
* Helpers to get DDR bank info
*/
-#define DDR_BASE_CS(n) ORION5X_DDR_REG(0x1500 + ((n) * 8))
-#define DDR_SIZE_CS(n) ORION5X_DDR_REG(0x1504 + ((n) * 8))
-#define DDR_MAX_CS 4
-#define DDR_REG_TO_SIZE(reg) (((reg) | 0xffffff) + 1)
-#define DDR_REG_TO_BASE(reg) ((reg) & 0xff000000)
-#define DDR_BANK_EN 1
+#define DDR_BASE_CS(n) ORION5X_DDR_REG(0x1500 + ((n) << 3))
+#define DDR_SIZE_CS(n) ORION5X_DDR_REG(0x1504 + ((n) << 3))
/*
* CPU Address Decode Windows registers
#define CPU_WIN_REMAP_LO(n) ORION5X_BRIDGE_REG(0x008 | ((n) << 4))
#define CPU_WIN_REMAP_HI(n) ORION5X_BRIDGE_REG(0x00c | ((n) << 4))
-/*
- * Gigabit Ethernet Address Decode Windows registers
- */
-#define ETH_WIN_BASE(win) ORION5X_ETH_REG(0x200 + ((win) * 8))
-#define ETH_WIN_SIZE(win) ORION5X_ETH_REG(0x204 + ((win) * 8))
-#define ETH_WIN_REMAP(win) ORION5X_ETH_REG(0x280 + ((win) * 4))
-#define ETH_WIN_EN ORION5X_ETH_REG(0x290)
-#define ETH_WIN_PROT ORION5X_ETH_REG(0x294)
-#define ETH_MAX_WIN 6
-#define ETH_MAX_REMAP_WIN 4
-
struct mbus_dram_target_info orion5x_mbus_dram_info;
{
setup_cpu_win(7, base, size, TARGET_PCIE, ATTR_PCIE_WA, -1);
}
-
-void __init orion5x_setup_eth_wins(void)
-{
- int i;
-
- /*
- * First, disable and clear windows
- */
- for (i = 0; i < ETH_MAX_WIN; i++) {
- orion5x_write(ETH_WIN_BASE(i), 0);
- orion5x_write(ETH_WIN_SIZE(i), 0);
- orion5x_setbits(ETH_WIN_EN, 1 << i);
- orion5x_clrbits(ETH_WIN_PROT, 0x3 << (i * 2));
- if (i < ETH_MAX_REMAP_WIN)
- orion5x_write(ETH_WIN_REMAP(i), 0);
- }
-
- /*
- * Setup windows for DDR banks.
- */
- for (i = 0; i < DDR_MAX_CS; i++) {
- u32 base, size;
- size = orion5x_read(DDR_SIZE_CS(i));
- base = orion5x_read(DDR_BASE_CS(i));
- if (size & DDR_BANK_EN) {
- base = DDR_REG_TO_BASE(base);
- size = DDR_REG_TO_SIZE(size);
- orion5x_write(ETH_WIN_SIZE(i), (size-1) & 0xffff0000);
- orion5x_write(ETH_WIN_BASE(i), (base & 0xffff0000) |
- (ATTR_DDR_CS(i) << 8) |
- TARGET_DDR);
- orion5x_clrbits(ETH_WIN_EN, 1 << i);
- orion5x_setbits(ETH_WIN_PROT, 0x3 << (i * 2));
- }
- }
-}
* (The Orion and Discovery (MV643xx) families use the same Ethernet driver)
****************************************************************************/
+struct mv643xx_eth_shared_platform_data orion5x_eth_shared_data = {
+ .dram = &orion5x_mbus_dram_info,
+ .t_clk = ORION5X_TCLK,
+};
+
static struct resource orion5x_eth_shared_resources[] = {
{
.start = ORION5X_ETH_PHYS_BASE + 0x2000,
static struct platform_device orion5x_eth_shared = {
.name = MV643XX_ETH_SHARED_NAME,
.id = 0,
+ .dev = {
+ .platform_data = &orion5x_eth_shared_data,
+ },
.num_resources = 1,
.resource = orion5x_eth_shared_resources,
};
void __init orion5x_eth_init(struct mv643xx_eth_platform_data *eth_data)
{
+ eth_data->shared = &orion5x_eth_shared;
orion5x_eth.dev.platform_data = eth_data;
+
platform_device_register(&orion5x_eth_shared);
platform_device_register(&orion5x_eth);
}
* Setup Orion address map
*/
orion5x_setup_cpu_mbus_bridge();
- orion5x_setup_eth_wins();
/*
* Register devices.
void orion5x_setup_dev1_win(u32 base, u32 size);
void orion5x_setup_dev2_win(u32 base, u32 size);
void orion5x_setup_pcie_wa_win(u32 base, u32 size);
-void orion5x_setup_eth_wins(void);
/*
* Shared code used internally by other Orion core functions.
# Common support (must be linked before board specific support)
obj-y += clock.o devices.o generic.o irq.o dma.o \
time.o gpio.o
+obj-$(CONFIG_PM) += pm.o sleep.o standby.o
+obj-$(CONFIG_CPU_FREQ) += cpu-pxa.o
+
+# Generic drivers that other drivers may depend upon
+obj-$(CONFIG_PXA_SSP) += ssp.o
+
+# SoC-specific code
obj-$(CONFIG_PXA25x) += mfp-pxa2xx.o pxa25x.o
obj-$(CONFIG_PXA27x) += mfp-pxa2xx.o pxa27x.o
obj-$(CONFIG_PXA3xx) += mfp-pxa3xx.o pxa3xx.o smemc.o
obj-$(CONFIG_LEDS) += $(led-y)
-# Misc features
-obj-$(CONFIG_PM) += pm.o sleep.o standby.o
-obj-$(CONFIG_CPU_FREQ) += cpu-pxa.o
-obj-$(CONFIG_PXA_SSP) += ssp.o
-
ifeq ($(CONFIG_PCI),y)
obj-$(CONFIG_MACH_ARMCORE) += cm-x270-pci.o
endif
static void corgi_poweroff(void)
{
- RCSR = RCSR_HWR | RCSR_WDR | RCSR_SMR | RCSR_GPR;
-
if (!machine_is_corgi())
/* Green LED off tells the bootloader to halt */
reset_scoop_gpio(&corgiscoop_device.dev, CORGI_SCP_LED_GREEN);
static void corgi_restart(char mode)
{
- RCSR = RCSR_HWR | RCSR_WDR | RCSR_SMR | RCSR_GPR;
-
if (!machine_is_corgi())
/* Green LED on tells the bootloader to reboot */
set_scoop_gpio(&corgiscoop_device.dev, CORGI_SCP_LED_GREEN);
#define freq_debug 0
#endif
+static unsigned int pxa27x_maxfreq;
+module_param(pxa27x_maxfreq, uint, 0);
+MODULE_PARM_DESC(pxa27x_maxfreq, "Set the pxa27x maxfreq in MHz"
+ "(typically 624=>pxa270, 416=>pxa271, 520=>pxa272)");
+
typedef struct {
unsigned int khz;
unsigned int membus;
unsigned int cccr;
unsigned int div2;
+ unsigned int cclkcfg;
} pxa_freqs_t;
/* Define the refresh period in mSec for the SDRAM and the number of rows */
-#define SDRAM_TREF 64 /* standard 64ms SDRAM */
-#define SDRAM_ROWS 4096 /* 64MB=8192 32MB=4096 */
-#define MDREFR_DRI(x) (((x) * SDRAM_TREF) / (SDRAM_ROWS * 32))
-
-#define CCLKCFG_TURBO 0x1
-#define CCLKCFG_FCS 0x2
-#define PXA25x_MIN_FREQ 99500
-#define PXA25x_MAX_FREQ 398100
-#define MDREFR_DB2_MASK (MDREFR_K2DB2 | MDREFR_K1DB2)
-#define MDREFR_DRI_MASK 0xFFF
+#define SDRAM_TREF 64 /* standard 64ms SDRAM */
+#define SDRAM_ROWS 4096 /* 64MB=8192 32MB=4096 */
+#define CCLKCFG_TURBO 0x1
+#define CCLKCFG_FCS 0x2
+#define CCLKCFG_HALFTURBO 0x4
+#define CCLKCFG_FASTBUS 0x8
+#define MDREFR_DB2_MASK (MDREFR_K2DB2 | MDREFR_K1DB2)
+#define MDREFR_DRI_MASK 0xFFF
+/*
+ * PXA255 definitions
+ */
/* Use the run mode frequencies for the CPUFREQ_POLICY_PERFORMANCE policy */
+#define CCLKCFG CCLKCFG_TURBO | CCLKCFG_FCS
+
static pxa_freqs_t pxa255_run_freqs[] =
{
- /* CPU MEMBUS CCCR DIV2*/
- { 99500, 99500, 0x121, 1}, /* run= 99, turbo= 99, PXbus=50, SDRAM=50 */
- {132700, 132700, 0x123, 1}, /* run=133, turbo=133, PXbus=66, SDRAM=66 */
- {199100, 99500, 0x141, 0}, /* run=199, turbo=199, PXbus=99, SDRAM=99 */
- {265400, 132700, 0x143, 1}, /* run=265, turbo=265, PXbus=133, SDRAM=66 */
- {331800, 165900, 0x145, 1}, /* run=331, turbo=331, PXbus=166, SDRAM=83 */
- {398100, 99500, 0x161, 0}, /* run=398, turbo=398, PXbus=196, SDRAM=99 */
- {0,}
+ /* CPU MEMBUS CCCR DIV2 CCLKCFG run turbo PXbus SDRAM */
+ { 99500, 99500, 0x121, 1, CCLKCFG}, /* 99, 99, 50, 50 */
+ {132700, 132700, 0x123, 1, CCLKCFG}, /* 133, 133, 66, 66 */
+ {199100, 99500, 0x141, 0, CCLKCFG}, /* 199, 199, 99, 99 */
+ {265400, 132700, 0x143, 1, CCLKCFG}, /* 265, 265, 133, 66 */
+ {331800, 165900, 0x145, 1, CCLKCFG}, /* 331, 331, 166, 83 */
+ {398100, 99500, 0x161, 0, CCLKCFG}, /* 398, 398, 196, 99 */
};
-#define NUM_RUN_FREQS ARRAY_SIZE(pxa255_run_freqs)
-
-static struct cpufreq_frequency_table pxa255_run_freq_table[NUM_RUN_FREQS+1];
/* Use the turbo mode frequencies for the CPUFREQ_POLICY_POWERSAVE policy */
static pxa_freqs_t pxa255_turbo_freqs[] =
{
- /* CPU MEMBUS CCCR DIV2*/
- { 99500, 99500, 0x121, 1}, /* run=99, turbo= 99, PXbus=50, SDRAM=50 */
- {199100, 99500, 0x221, 0}, /* run=99, turbo=199, PXbus=50, SDRAM=99 */
- {298500, 99500, 0x321, 0}, /* run=99, turbo=287, PXbus=50, SDRAM=99 */
- {298600, 99500, 0x1c1, 0}, /* run=199, turbo=287, PXbus=99, SDRAM=99 */
- {398100, 99500, 0x241, 0}, /* run=199, turbo=398, PXbus=99, SDRAM=99 */
- {0,}
+ /* CPU MEMBUS CCCR DIV2 CCLKCFG run turbo PXbus SDRAM */
+ { 99500, 99500, 0x121, 1, CCLKCFG}, /* 99, 99, 50, 50 */
+ {199100, 99500, 0x221, 0, CCLKCFG}, /* 99, 199, 50, 99 */
+ {298500, 99500, 0x321, 0, CCLKCFG}, /* 99, 287, 50, 99 */
+ {298600, 99500, 0x1c1, 0, CCLKCFG}, /* 199, 287, 99, 99 */
+ {398100, 99500, 0x241, 0, CCLKCFG}, /* 199, 398, 99, 99 */
+};
+
+#define NUM_PXA25x_RUN_FREQS ARRAY_SIZE(pxa255_run_freqs)
+#define NUM_PXA25x_TURBO_FREQS ARRAY_SIZE(pxa255_turbo_freqs)
+
+static struct cpufreq_frequency_table
+ pxa255_run_freq_table[NUM_PXA25x_RUN_FREQS+1];
+static struct cpufreq_frequency_table
+ pxa255_turbo_freq_table[NUM_PXA25x_TURBO_FREQS+1];
+
+/*
+ * PXA270 definitions
+ *
+ * For the PXA27x:
+ * Control variables are A, L, 2N for CCCR; B, HT, T for CLKCFG.
+ *
+ * A = 0 => memory controller clock from table 3-7,
+ * A = 1 => memory controller clock = system bus clock
+ * Run mode frequency = 13 MHz * L
+ * Turbo mode frequency = 13 MHz * L * N
+ * System bus frequency = 13 MHz * L / (B + 1)
+ *
+ * In CCCR:
+ * A = 1
+ * L = 16 oscillator to run mode ratio
+ * 2N = 6 2 * (turbo mode to run mode ratio)
+ *
+ * In CCLKCFG:
+ * B = 1 Fast bus mode
+ * HT = 0 Half-Turbo mode
+ * T = 1 Turbo mode
+ *
+ * For now, just support some of the combinations in table 3-7 of
+ * PXA27x Processor Family Developer's Manual to simplify frequency
+ * change sequences.
+ */
+#define PXA27x_CCCR(A, L, N2) (A << 25 | N2 << 7 | L)
+#define CCLKCFG2(B, HT, T) \
+ (CCLKCFG_FCS | \
+ ((B) ? CCLKCFG_FASTBUS : 0) | \
+ ((HT) ? CCLKCFG_HALFTURBO : 0) | \
+ ((T) ? CCLKCFG_TURBO : 0))
+
+static pxa_freqs_t pxa27x_freqs[] = {
+ {104000, 104000, PXA27x_CCCR(1, 8, 2), 0, CCLKCFG2(1, 0, 1)},
+ {156000, 104000, PXA27x_CCCR(1, 8, 6), 0, CCLKCFG2(1, 1, 1)},
+ {208000, 208000, PXA27x_CCCR(0, 16, 2), 1, CCLKCFG2(0, 0, 1)},
+ {312000, 208000, PXA27x_CCCR(1, 16, 3), 1, CCLKCFG2(1, 0, 1)},
+ {416000, 208000, PXA27x_CCCR(1, 16, 4), 1, CCLKCFG2(1, 0, 1)},
+ {520000, 208000, PXA27x_CCCR(1, 16, 5), 1, CCLKCFG2(1, 0, 1)},
+ {624000, 208000, PXA27x_CCCR(1, 16, 6), 1, CCLKCFG2(1, 0, 1)}
};
-#define NUM_TURBO_FREQS ARRAY_SIZE(pxa255_turbo_freqs)
-static struct cpufreq_frequency_table pxa255_turbo_freq_table[NUM_TURBO_FREQS+1];
+#define NUM_PXA27x_FREQS ARRAY_SIZE(pxa27x_freqs)
+static struct cpufreq_frequency_table
+ pxa27x_freq_table[NUM_PXA27x_FREQS+1];
extern unsigned get_clk_frequency_khz(int info);
+static void find_freq_tables(struct cpufreq_policy *policy,
+ struct cpufreq_frequency_table **freq_table,
+ pxa_freqs_t **pxa_freqs)
+{
+ if (cpu_is_pxa25x()) {
+ if (policy->policy == CPUFREQ_POLICY_PERFORMANCE) {
+ *pxa_freqs = pxa255_run_freqs;
+ *freq_table = pxa255_run_freq_table;
+ } else if (policy->policy == CPUFREQ_POLICY_POWERSAVE) {
+ *pxa_freqs = pxa255_turbo_freqs;
+ *freq_table = pxa255_turbo_freq_table;
+ } else {
+ printk("CPU PXA: Unknown policy found. "
+ "Using CPUFREQ_POLICY_PERFORMANCE\n");
+ *pxa_freqs = pxa255_run_freqs;
+ *freq_table = pxa255_run_freq_table;
+ }
+ }
+ if (cpu_is_pxa27x()) {
+ *pxa_freqs = pxa27x_freqs;
+ *freq_table = pxa27x_freq_table;
+ }
+}
+
+static void pxa27x_guess_max_freq(void)
+{
+ if (!pxa27x_maxfreq) {
+ pxa27x_maxfreq = 416000;
+ printk(KERN_INFO "PXA CPU 27x max frequency not defined "
+ "(pxa27x_maxfreq), assuming pxa271 with %dkHz maxfreq\n",
+ pxa27x_maxfreq);
+ } else {
+ pxa27x_maxfreq *= 1000;
+ }
+}
+
+static u32 mdrefr_dri(unsigned int freq)
+{
+ u32 dri = 0;
+
+ if (cpu_is_pxa25x())
+ dri = ((freq * SDRAM_TREF) / (SDRAM_ROWS * 32));
+ if (cpu_is_pxa27x())
+ dri = ((freq * SDRAM_TREF) / (SDRAM_ROWS - 31)) / 32;
+ return dri;
+}
+
/* find a valid frequency point */
static int pxa_verify_policy(struct cpufreq_policy *policy)
{
struct cpufreq_frequency_table *pxa_freqs_table;
+ pxa_freqs_t *pxa_freqs;
int ret;
- if (policy->policy == CPUFREQ_POLICY_PERFORMANCE) {
- pxa_freqs_table = pxa255_run_freq_table;
- } else if (policy->policy == CPUFREQ_POLICY_POWERSAVE) {
- pxa_freqs_table = pxa255_turbo_freq_table;
- } else {
- printk("CPU PXA: Unknown policy found. "
- "Using CPUFREQ_POLICY_PERFORMANCE\n");
- pxa_freqs_table = pxa255_run_freq_table;
- }
-
+ find_freq_tables(policy, &pxa_freqs_table, &pxa_freqs);
ret = cpufreq_frequency_table_verify(policy, pxa_freqs_table);
if (freq_debug)
pr_debug("Verified CPU policy: %dKhz min to %dKhz max\n",
- policy->min, policy->max);
+ policy->min, policy->max);
return ret;
}
+static unsigned int pxa_cpufreq_get(unsigned int cpu)
+{
+ return get_clk_frequency_khz(0);
+}
+
static int pxa_set_target(struct cpufreq_policy *policy,
- unsigned int target_freq,
- unsigned int relation)
+ unsigned int target_freq,
+ unsigned int relation)
{
struct cpufreq_frequency_table *pxa_freqs_table;
pxa_freqs_t *pxa_freq_settings;
struct cpufreq_freqs freqs;
unsigned int idx;
unsigned long flags;
- unsigned int unused, preset_mdrefr, postset_mdrefr;
- void *ramstart = phys_to_virt(0xa0000000);
+ unsigned int new_freq_cpu, new_freq_mem;
+ unsigned int unused, preset_mdrefr, postset_mdrefr, cclkcfg;
/* Get the current policy */
- if (policy->policy == CPUFREQ_POLICY_PERFORMANCE) {
- pxa_freq_settings = pxa255_run_freqs;
- pxa_freqs_table = pxa255_run_freq_table;
- } else if (policy->policy == CPUFREQ_POLICY_POWERSAVE) {
- pxa_freq_settings = pxa255_turbo_freqs;
- pxa_freqs_table = pxa255_turbo_freq_table;
- } else {
- printk("CPU PXA: Unknown policy found. "
- "Using CPUFREQ_POLICY_PERFORMANCE\n");
- pxa_freq_settings = pxa255_run_freqs;
- pxa_freqs_table = pxa255_run_freq_table;
- }
+ find_freq_tables(policy, &pxa_freqs_table, &pxa_freq_settings);
/* Lookup the next frequency */
if (cpufreq_frequency_table_target(policy, pxa_freqs_table,
- target_freq, relation, &idx)) {
+ target_freq, relation, &idx)) {
return -EINVAL;
}
+ new_freq_cpu = pxa_freq_settings[idx].khz;
+ new_freq_mem = pxa_freq_settings[idx].membus;
freqs.old = policy->cur;
- freqs.new = pxa_freq_settings[idx].khz;
+ freqs.new = new_freq_cpu;
freqs.cpu = policy->cpu;
if (freq_debug)
- pr_debug(KERN_INFO "Changing CPU frequency to %d Mhz, (SDRAM %d Mhz)\n",
- freqs.new / 1000, (pxa_freq_settings[idx].div2) ?
- (pxa_freq_settings[idx].membus / 2000) :
- (pxa_freq_settings[idx].membus / 1000));
+		pr_debug("Changing CPU frequency to %d MHz, "
+			 "(SDRAM %d MHz)\n",
+ freqs.new / 1000, (pxa_freq_settings[idx].div2) ?
+ (new_freq_mem / 2000) : (new_freq_mem / 1000));
/*
* Tell everyone what we're about to do...
cpufreq_notify_transition(&freqs, CPUFREQ_PRECHANGE);
/* Calculate the next MDREFR. If we're slowing down the SDRAM clock
- * we need to preset the smaller DRI before the change. If we're speeding
- * up we need to set the larger DRI value after the change.
+ * we need to preset the smaller DRI before the change. If we're
+ * speeding up we need to set the larger DRI value after the change.
*/
preset_mdrefr = postset_mdrefr = MDREFR;
- if ((MDREFR & MDREFR_DRI_MASK) > MDREFR_DRI(pxa_freq_settings[idx].membus)) {
- preset_mdrefr = (preset_mdrefr & ~MDREFR_DRI_MASK) |
- MDREFR_DRI(pxa_freq_settings[idx].membus);
+ if ((MDREFR & MDREFR_DRI_MASK) > mdrefr_dri(new_freq_mem)) {
+ preset_mdrefr = (preset_mdrefr & ~MDREFR_DRI_MASK);
+ preset_mdrefr |= mdrefr_dri(new_freq_mem);
}
- postset_mdrefr = (postset_mdrefr & ~MDREFR_DRI_MASK) |
- MDREFR_DRI(pxa_freq_settings[idx].membus);
+ postset_mdrefr =
+ (postset_mdrefr & ~MDREFR_DRI_MASK) | mdrefr_dri(new_freq_mem);
/* If we're dividing the memory clock by two for the SDRAM clock, this
* must be set prior to the change. Clearing the divide must be done
local_irq_save(flags);
- /* Set new the CCCR */
+ /* Set new the CCCR and prepare CCLKCFG */
CCCR = pxa_freq_settings[idx].cccr;
+ cclkcfg = pxa_freq_settings[idx].cclkcfg;
asm volatile(" \n\
ldr r4, [%1] /* load MDREFR */ \n\
b 2f \n\
- .align 5 \n\
+ .align 5 \n\
1: \n\
- str %4, [%1] /* preset the MDREFR */ \n\
+ str %3, [%1] /* preset the MDREFR */ \n\
mcr p14, 0, %2, c6, c0, 0 /* set CCLKCFG[FCS] */ \n\
- str %5, [%1] /* postset the MDREFR */ \n\
+ str %4, [%1] /* postset the MDREFR */ \n\
\n\
b 3f \n\
2: b 1b \n\
3: nop \n\
"
- : "=&r" (unused)
- : "r" (&MDREFR), "r" (CCLKCFG_TURBO|CCLKCFG_FCS), "r" (ramstart),
- "r" (preset_mdrefr), "r" (postset_mdrefr)
- : "r4", "r5");
+ : "=&r" (unused)
+ : "r" (&MDREFR), "r" (cclkcfg),
+ "r" (preset_mdrefr), "r" (postset_mdrefr)
+ : "r4", "r5");
local_irq_restore(flags);
/*
return 0;
}
-static unsigned int pxa_cpufreq_get(unsigned int cpu)
-{
- return get_clk_frequency_khz(0);
-}
-
-static int pxa_cpufreq_init(struct cpufreq_policy *policy)
+static __init int pxa_cpufreq_init(struct cpufreq_policy *policy)
{
int i;
+ unsigned int freq;
+
+ /* try to guess pxa27x cpu */
+ if (cpu_is_pxa27x())
+ pxa27x_guess_max_freq();
/* set default policy and cpuinfo */
policy->governor = CPUFREQ_DEFAULT_GOVERNOR;
- policy->policy = CPUFREQ_POLICY_PERFORMANCE;
- policy->cpuinfo.max_freq = PXA25x_MAX_FREQ;
- policy->cpuinfo.min_freq = PXA25x_MIN_FREQ;
+ if (cpu_is_pxa25x())
+ policy->policy = CPUFREQ_POLICY_PERFORMANCE;
policy->cpuinfo.transition_latency = 1000; /* FIXME: 1 ms, assumed */
- policy->cur = get_clk_frequency_khz(0); /* current freq */
+ policy->cur = get_clk_frequency_khz(0); /* current freq */
policy->min = policy->max = policy->cur;
- /* Generate the run cpufreq_frequency_table struct */
- for (i = 0; i < NUM_RUN_FREQS; i++) {
+ /* Generate pxa25x the run cpufreq_frequency_table struct */
+ for (i = 0; i < NUM_PXA25x_RUN_FREQS; i++) {
pxa255_run_freq_table[i].frequency = pxa255_run_freqs[i].khz;
pxa255_run_freq_table[i].index = i;
}
-
pxa255_run_freq_table[i].frequency = CPUFREQ_TABLE_END;
- /* Generate the turbo cpufreq_frequency_table struct */
- for (i = 0; i < NUM_TURBO_FREQS; i++) {
- pxa255_turbo_freq_table[i].frequency = pxa255_turbo_freqs[i].khz;
+
+ /* Generate pxa25x the turbo cpufreq_frequency_table struct */
+ for (i = 0; i < NUM_PXA25x_TURBO_FREQS; i++) {
+ pxa255_turbo_freq_table[i].frequency =
+ pxa255_turbo_freqs[i].khz;
pxa255_turbo_freq_table[i].index = i;
}
pxa255_turbo_freq_table[i].frequency = CPUFREQ_TABLE_END;
+ /* Generate the pxa27x cpufreq_frequency_table struct */
+ for (i = 0; i < NUM_PXA27x_FREQS; i++) {
+ freq = pxa27x_freqs[i].khz;
+ if (freq > pxa27x_maxfreq)
+ break;
+ pxa27x_freq_table[i].frequency = freq;
+ pxa27x_freq_table[i].index = i;
+ }
+ pxa27x_freq_table[i].frequency = CPUFREQ_TABLE_END;
+
+ /*
+ * Set the policy's minimum and maximum frequencies from the tables
+	 * just constructed. This sets cpuinfo.min_freq, cpuinfo.max_freq, min and max.
+ */
+ if (cpu_is_pxa25x())
+ cpufreq_frequency_table_cpuinfo(policy, pxa255_run_freq_table);
+ else if (cpu_is_pxa27x())
+ cpufreq_frequency_table_cpuinfo(policy, pxa27x_freq_table);
+
printk(KERN_INFO "PXA CPU frequency change support initialized\n");
return 0;
.target = pxa_set_target,
.init = pxa_cpufreq_init,
.get = pxa_cpufreq_get,
- .name = "PXA25x",
+ .name = "PXA2xx",
};
static int __init pxa_cpu_init(void)
{
int ret = -ENODEV;
- if (cpu_is_pxa25x())
+ if (cpu_is_pxa25x() || cpu_is_pxa27x())
ret = cpufreq_register_driver(&pxa_cpufreq_driver);
return ret;
}
static void __exit pxa_cpu_exit(void)
{
- if (cpu_is_pxa25x())
- cpufreq_unregister_driver(&pxa_cpufreq_driver);
+ cpufreq_unregister_driver(&pxa_cpufreq_driver);
}
-MODULE_AUTHOR ("Intrinsyc Software Inc.");
-MODULE_DESCRIPTION ("CPU frequency changing driver for the PXA architecture");
+MODULE_AUTHOR("Intrinsyc Software Inc.");
+MODULE_DESCRIPTION("CPU frequency changing driver for the PXA architecture");
MODULE_LICENSE("GPL");
module_init(pxa_cpu_init);
module_exit(pxa_cpu_exit);
.cmap_inverse = 0,
.cmap_static = 0,
.lcd_conn = LCD_COLOR_DSTN_16BPP | LCD_PCLK_EDGE_FALL |
- LCD_AC_BIAS_FREQ(255);
+ LCD_AC_BIAS_FREQ(255),
};
#define MMC_POLL_RATE msecs_to_jiffies(1000)
if (state != PM_SUSPEND_STANDBY) {
pxa_cpu_pm_fns->save(sleep_save);
/* before sleeping, calculate and save a checksum */
- for (i = 0; i < pxa_cpu_pm_fns->save_size - 1; i++)
+ for (i = 0; i < pxa_cpu_pm_fns->save_count - 1; i++)
sleep_save_checksum += sleep_save[i];
}
- /* Clear reset status */
- RCSR = RCSR_HWR | RCSR_WDR | RCSR_SMR | RCSR_GPR;
-
/* *** go zzz *** */
pxa_cpu_pm_fns->enter(state);
cpu_init();
if (state != PM_SUSPEND_STANDBY) {
/* after sleeping, validate the checksum */
- for (i = 0; i < pxa_cpu_pm_fns->save_size - 1; i++)
+ for (i = 0; i < pxa_cpu_pm_fns->save_count - 1; i++)
checksum += sleep_save[i];
/* if invalid, display message and wait for a hardware reset */
return -EINVAL;
}
- sleep_save = kmalloc(pxa_cpu_pm_fns->save_size, GFP_KERNEL);
+ sleep_save = kmalloc(pxa_cpu_pm_fns->save_count * sizeof(unsigned long),
+ GFP_KERNEL);
if (!sleep_save) {
printk(KERN_ERR "failed to alloc memory for pm save\n");
return -ENOMEM;
static void poodle_poweroff(void)
{
- RCSR = RCSR_HWR | RCSR_WDR | RCSR_SMR | RCSR_GPR;
arm_machine_restart('h');
}
static void poodle_restart(char mode)
{
- RCSR = RCSR_HWR | RCSR_WDR | RCSR_SMR | RCSR_GPR;
arm_machine_restart('h');
}
* More ones like CP and general purpose register values are preserved
* with the stack pointer in sleep.S.
*/
-enum { SLEEP_SAVE_START = 0,
-
- SLEEP_SAVE_PGSR0, SLEEP_SAVE_PGSR1, SLEEP_SAVE_PGSR2,
+enum { SLEEP_SAVE_PGSR0, SLEEP_SAVE_PGSR1, SLEEP_SAVE_PGSR2,
SLEEP_SAVE_GAFR0_L, SLEEP_SAVE_GAFR0_U,
SLEEP_SAVE_GAFR1_L, SLEEP_SAVE_GAFR1_U,
SLEEP_SAVE_CKEN,
- SLEEP_SAVE_SIZE
+ SLEEP_SAVE_COUNT
};
static void pxa25x_cpu_pm_enter(suspend_state_t state)
{
+ /* Clear reset status */
+ RCSR = RCSR_HWR | RCSR_WDR | RCSR_SMR | RCSR_GPR;
+
switch (state) {
case PM_SUSPEND_MEM:
/* set resume return address */
}
static struct pxa_cpu_pm_fns pxa25x_cpu_pm_fns = {
- .save_size = SLEEP_SAVE_SIZE,
+ .save_count = SLEEP_SAVE_COUNT,
.valid = suspend_valid_only_mem,
.save = pxa25x_cpu_pm_save,
.restore = pxa25x_cpu_pm_restore,
* More ones like CP and general purpose register values are preserved
* with the stack pointer in sleep.S.
*/
-enum { SLEEP_SAVE_START = 0,
-
- SLEEP_SAVE_PGSR0, SLEEP_SAVE_PGSR1, SLEEP_SAVE_PGSR2, SLEEP_SAVE_PGSR3,
+enum { SLEEP_SAVE_PGSR0, SLEEP_SAVE_PGSR1, SLEEP_SAVE_PGSR2, SLEEP_SAVE_PGSR3,
SLEEP_SAVE_GAFR0_L, SLEEP_SAVE_GAFR0_U,
SLEEP_SAVE_GAFR1_L, SLEEP_SAVE_GAFR1_U,
SLEEP_SAVE_PWER, SLEEP_SAVE_PCFR, SLEEP_SAVE_PRER,
SLEEP_SAVE_PFER, SLEEP_SAVE_PKWR,
- SLEEP_SAVE_SIZE
+ SLEEP_SAVE_COUNT
};
void pxa27x_cpu_pm_save(unsigned long *sleep_save)
/* Clear edge-detect status register. */
PEDR = 0xDF12FE1B;
+ /* Clear reset status */
+ RCSR = RCSR_HWR | RCSR_WDR | RCSR_SMR | RCSR_GPR;
+
switch (state) {
case PM_SUSPEND_STANDBY:
pxa_cpu_standby();
}
static struct pxa_cpu_pm_fns pxa27x_cpu_pm_fns = {
- .save_size = SLEEP_SAVE_SIZE,
+ .save_count = SLEEP_SAVE_COUNT,
.save = pxa27x_cpu_pm_save,
.restore = pxa27x_cpu_pm_restore,
.valid = pxa27x_cpu_pm_valid,
#define SAVE(x) sleep_save[SLEEP_SAVE_##x] = x
#define RESTORE(x) x = sleep_save[SLEEP_SAVE_##x]
-enum { SLEEP_SAVE_START = 0,
- SLEEP_SAVE_CKENA,
+enum { SLEEP_SAVE_CKENA,
SLEEP_SAVE_CKENB,
SLEEP_SAVE_ACCR,
- SLEEP_SAVE_SIZE,
+ SLEEP_SAVE_COUNT,
};
static void pxa3xx_cpu_pm_save(unsigned long *sleep_save)
}
static struct pxa_cpu_pm_fns pxa3xx_cpu_pm_fns = {
- .save_size = SLEEP_SAVE_SIZE,
+ .save_count = SLEEP_SAVE_COUNT,
.save = pxa3xx_cpu_pm_save,
.restore = pxa3xx_cpu_pm_restore,
.valid = pxa3xx_cpu_pm_valid,
static void spitz_poweroff(void)
{
- RCSR = RCSR_HWR | RCSR_WDR | RCSR_SMR | RCSR_GPR;
-
pxa_gpio_mode(SPITZ_GPIO_ON_RESET | GPIO_OUT);
GPSR(SPITZ_GPIO_ON_RESET) = GPIO_bit(SPITZ_GPIO_ON_RESET);
/* nRESET_OUT Disable */
PSLR |= PSLR_SL_ROD;
- /* Clear reset status */
- RCSR = RCSR_HWR | RCSR_WDR | RCSR_SMR | RCSR_GPR;
-
/* Stop 3.6MHz and drive HIGH to PCMCIA and CS */
PCFR = PCFR_GPR_EN | PCFR_OPDE;
}
static void tosa_poweroff(void)
{
- RCSR = RCSR_HWR | RCSR_WDR | RCSR_SMR | RCSR_GPR;
-
pxa_gpio_mode(TOSA_GPIO_ON_RESET | GPIO_OUT);
GPSR(TOSA_GPIO_ON_RESET) = GPIO_bit(TOSA_GPIO_ON_RESET);
* More ones like CP and general purpose register values are preserved
* on the stack and then the stack pointer is stored last in sleep.S.
*/
-enum { SLEEP_SAVE_SP = 0,
-
- SLEEP_SAVE_GPDR, SLEEP_SAVE_GAFR,
+enum { SLEEP_SAVE_GPDR, SLEEP_SAVE_GAFR,
SLEEP_SAVE_PPDR, SLEEP_SAVE_PPSR, SLEEP_SAVE_PPAR, SLEEP_SAVE_PSDR,
SLEEP_SAVE_Ser1SDCR0,
- SLEEP_SAVE_SIZE
+ SLEEP_SAVE_COUNT
};
static int sa11x0_pm_enter(suspend_state_t state)
{
- unsigned long gpio, sleep_save[SLEEP_SAVE_SIZE];
+ unsigned long gpio, sleep_save[SLEEP_SAVE_COUNT];
gpio = GPLR;
clk->parent = parent;
- if (clk == &s3c24xx_dclk0)
+ if (clk == &s3c24xx_clkout0)
mask = S3C2410_MISCCR_CLK0_MASK;
else {
source <<= 4;
struct clk s3c24xx_dclk1 = {
.name = "dclk1",
.id = -1,
- .ctrlbit = S3C2410_DCLKCON_DCLK0EN,
+ .ctrlbit = S3C2410_DCLKCON_DCLK1EN,
.enable = s3c24xx_dclk_enable,
.set_parent = s3c24xx_dclk_setparent,
.set_rate = s3c24xx_set_dclk_rate,
config BANK_1
hex "Bank 1"
default 0x7BB0
+ default 0x5558 if BF54x
config BANK_2
hex "Bank 2"
endmenu
-if (BF537 || BF533 || BF54x)
-
menu "CPU Frequency scaling"
source "drivers/cpufreq/Kconfig"
-config CPU_FREQ
- bool
+config CPU_VOLTAGE
+ bool "CPU Voltage scaling"
+ depends on EXPERIMENTAL
+ depends on CPU_FREQ
default n
help
- If you want to enable this option, you should select the
- DPMC driver from Character Devices.
-endmenu
+ Say Y here if you want CPU voltage scaling according to the CPU frequency.
+ This option violates the PLL BYPASS recommendation in the Blackfin Processor
+ manuals. There is a theoretical risk that during VDDINT transitions
+ the PLL may unlock.
-endif
+endmenu
source "net/Kconfig"
/* offsets into the thread struct */
DEFINE(THREAD_KSP, offsetof(struct thread_struct, ksp));
DEFINE(THREAD_USP, offsetof(struct thread_struct, usp));
- DEFINE(THREAD_SR, offsetof(struct thread_struct, seqstat));
- DEFINE(PT_SR, offsetof(struct thread_struct, seqstat));
- DEFINE(THREAD_ESP0, offsetof(struct thread_struct, esp0));
DEFINE(THREAD_PC, offsetof(struct thread_struct, pc));
DEFINE(KERNEL_STACK_SIZE, THREAD_SIZE);
/*
* This file contains sequences of code that will be copied to a
- * fixed location, defined in <asm/atomic_seq.h>. The interrupt
+ * fixed location, defined in <asm/fixed_code.h>. The interrupt
* handlers ensure that these sequences appear to be atomic when
* executed from userspace.
* These are aligned to 16 bytes, so that we have some space to replace
module_frob_arch_sections(Elf_Ehdr * hdr, Elf_Shdr * sechdrs,
char *secstrings, struct module *mod)
{
+ /*
+ * XXX: sechdrs are vmalloced in kernel/module.c
+	 * and are vfreed just after the module is loaded,
+	 * so we hack to keep only the information we need
+	 * in mod->arch to correctly free L1 I/D sram later.
+	 * NOTE: this breaks the semantics of the mod->arch structure.
+ */
Elf_Shdr *s, *sechdrs_end = sechdrs + hdr->e_shnum;
void *dest = NULL;
if ((strcmp(".l1.text", secstrings + s->sh_name) == 0) ||
((strcmp(".text", secstrings + s->sh_name) == 0) &&
(hdr->e_flags & FLG_CODE_IN_L1) && (s->sh_size > 0))) {
- mod->arch.text_l1 = s;
dest = l1_inst_sram_alloc(s->sh_size);
+ mod->arch.text_l1 = dest;
if (dest == NULL) {
printk(KERN_ERR
"module %s: L1 instruction memory allocation failed\n",
if ((strcmp(".l1.data", secstrings + s->sh_name) == 0) ||
((strcmp(".data", secstrings + s->sh_name) == 0) &&
(hdr->e_flags & FLG_DATA_IN_L1) && (s->sh_size > 0))) {
- mod->arch.data_a_l1 = s;
dest = l1_data_sram_alloc(s->sh_size);
+ mod->arch.data_a_l1 = dest;
if (dest == NULL) {
printk(KERN_ERR
"module %s: L1 data memory allocation failed\n",
if (strcmp(".l1.bss", secstrings + s->sh_name) == 0 ||
((strcmp(".bss", secstrings + s->sh_name) == 0) &&
(hdr->e_flags & FLG_DATA_IN_L1) && (s->sh_size > 0))) {
- mod->arch.bss_a_l1 = s;
dest = l1_data_sram_alloc(s->sh_size);
+ mod->arch.bss_a_l1 = dest;
if (dest == NULL) {
printk(KERN_ERR
"module %s: L1 data memory allocation failed\n",
s->sh_addr = (unsigned long)dest;
}
if (strcmp(".l1.data.B", secstrings + s->sh_name) == 0) {
- mod->arch.data_b_l1 = s;
dest = l1_data_B_sram_alloc(s->sh_size);
+ mod->arch.data_b_l1 = dest;
if (dest == NULL) {
printk(KERN_ERR
"module %s: L1 data memory allocation failed\n",
s->sh_addr = (unsigned long)dest;
}
if (strcmp(".l1.bss.B", secstrings + s->sh_name) == 0) {
- mod->arch.bss_b_l1 = s;
dest = l1_data_B_sram_alloc(s->sh_size);
+ mod->arch.bss_b_l1 = dest;
if (dest == NULL) {
printk(KERN_ERR
"module %s: L1 data memory allocation failed\n",
void module_arch_cleanup(struct module *mod)
{
- if ((mod->arch.text_l1) && (mod->arch.text_l1->sh_addr))
- l1_inst_sram_free((void *)mod->arch.text_l1->sh_addr);
- if ((mod->arch.data_a_l1) && (mod->arch.data_a_l1->sh_addr))
- l1_data_sram_free((void *)mod->arch.data_a_l1->sh_addr);
- if ((mod->arch.bss_a_l1) && (mod->arch.bss_a_l1->sh_addr))
- l1_data_sram_free((void *)mod->arch.bss_a_l1->sh_addr);
- if ((mod->arch.data_b_l1) && (mod->arch.data_b_l1->sh_addr))
- l1_data_B_sram_free((void *)mod->arch.data_b_l1->sh_addr);
- if ((mod->arch.bss_b_l1) && (mod->arch.bss_b_l1->sh_addr))
- l1_data_B_sram_free((void *)mod->arch.bss_b_l1->sh_addr);
+ if (mod->arch.text_l1)
+ l1_inst_sram_free((void *)mod->arch.text_l1);
+ if (mod->arch.data_a_l1)
+ l1_data_sram_free((void *)mod->arch.data_a_l1);
+ if (mod->arch.bss_a_l1)
+ l1_data_sram_free((void *)mod->arch.bss_a_l1);
+ if (mod->arch.data_b_l1)
+ l1_data_B_sram_free((void *)mod->arch.data_b_l1);
+ if (mod->arch.bss_b_l1)
+ l1_data_B_sram_free((void *)mod->arch.bss_b_l1);
}
void finish_atomic_sections (struct pt_regs *regs)
{
-	int __user *up0 = (int __user *)&regs->p0;
+ int __user *up0 = (int __user *)regs->p0;
if (regs->pc < ATOMIC_SEQS_START || regs->pc >= ATOMIC_SEQS_END)
return;
{
unsigned long tmp;
/* make sure the single step bit is not set. */
- tmp = get_reg(child, PT_SR) & ~(TRACE_BITS << 16);
- put_reg(child, PT_SR, tmp);
+ tmp = get_reg(child, PT_SYSCFG) & ~TRACE_BITS;
+ put_reg(child, PT_SYSCFG, tmp);
}
long arch_ptrace(struct task_struct *child, long request, long addr, long data)
#define _BLOCKABLE (~(sigmask(SIGKILL) | sigmask(SIGSTOP)))
+/* Location of the trace bit in SYSCFG. */
+#define TRACE_BITS 0x0001
+
struct fdpic_func_descriptor {
unsigned long text;
unsigned long GOT;
regs->r1 = (unsigned long)(&frame->info);
regs->r2 = (unsigned long)(&frame->uc);
+ /*
+ * Clear the trace flag when entering the signal handler, but
+ * notify any tracer that was single-stepping it. The tracer
+ * may want to single-step inside the handler too.
+ */
+ if (regs->syscfg & TRACE_BITS) {
+ regs->syscfg &= ~TRACE_BITS;
+ ptrace_notify(SIGTRAP);
+ }
+
return 0;
give_sigsegv:
static cycle_t read_cycles(void)
{
- return get_cycles();
+ return __bfin_cycles_off + (get_cycles() << __bfin_cycles_mod);
}
unsigned long long sched_clock(void)
break;
}
case CLOCK_EVT_MODE_ONESHOT:
- bfin_write_TSCALE(0);
+ bfin_write_TSCALE(TIME_SCALE - 1);
bfin_write_TCOUNT(0);
bfin_write_TCNTL(TMPWR | TMREN);
CSYNC();
static int __init bfin_clockevent_init(void)
{
+ unsigned long timer_clk;
+
+ timer_clk = get_cclk() / TIME_SCALE;
+
setup_irq(IRQ_CORETMR, &bfin_timer_irq);
bfin_timer_init();
- clockevent_bfin.mult = div_sc(get_cclk(), NSEC_PER_SEC, clockevent_bfin.shift);
+ clockevent_bfin.mult = div_sc(timer_clk, NSEC_PER_SEC, clockevent_bfin.shift);
clockevent_bfin.max_delta_ns = clockevent_delta2ns(-1, &clockevent_bfin);
clockevent_bfin.min_delta_ns = clockevent_delta2ns(100, &clockevent_bfin);
clockevents_register_device(&clockevent_bfin);
#include <linux/platform_device.h>
#include <linux/mtd/mtd.h>
#include <linux/mtd/partitions.h>
+#include <linux/mtd/physmap.h>
#include <linux/spi/spi.h>
#include <linux/spi/flash.h>
#if defined(CONFIG_USB_ISP1362_HCD) || defined(CONFIG_USB_ISP1362_HCD_MODULE)
#include <linux/usb/isp1362.h>
#endif
#include <linux/ata_platform.h>
+#include <linux/i2c.h>
#include <linux/irq.h>
#include <linux/interrupt.h>
#include <linux/usb/sl811.h>
#include <asm/reboot.h>
#include <asm/nand.h>
#include <asm/portmux.h>
+#include <asm/dpmc.h>
#include <linux/spi/ad7877.h>
/*
};
#endif
+#if defined(CONFIG_MTD_PHYSMAP) || defined(CONFIG_MTD_PHYSMAP_MODULE)
+static struct mtd_partition ezkit_partitions[] = {
+ {
+ .name = "Bootloader",
+ .size = 0x40000,
+ .offset = 0,
+ }, {
+ .name = "Kernel",
+ .size = 0x1C0000,
+ .offset = MTDPART_OFS_APPEND,
+ }, {
+ .name = "RootFS",
+ .size = MTDPART_SIZ_FULL,
+ .offset = MTDPART_OFS_APPEND,
+ }
+};
+
+static struct physmap_flash_data ezkit_flash_data = {
+ .width = 2,
+ .parts = ezkit_partitions,
+ .nr_parts = ARRAY_SIZE(ezkit_partitions),
+};
+
+static struct resource ezkit_flash_resource = {
+ .start = 0x20000000,
+ .end = 0x203fffff,
+ .flags = IORESOURCE_MEM,
+};
+
+static struct platform_device ezkit_flash_device = {
+ .name = "physmap-flash",
+ .id = 0,
+ .dev = {
+ .platform_data = &ezkit_flash_data,
+ },
+ .num_resources = 1,
+ .resource = &ezkit_flash_resource,
+};
+#endif
+
#if defined(CONFIG_MTD_NAND_BF5XX) || defined(CONFIG_MTD_NAND_BF5XX_MODULE)
static struct mtd_partition partition_info[] = {
{
.offset = 0,
.mask_flags = MTD_CAP_ROM
}, {
- .name = "kernel",
- .size = 0xe0000,
- .offset = MTDPART_OFS_APPEND,
- }, {
- .name = "file system",
+ .name = "linux kernel",
.size = MTDPART_SIZ_FULL,
.offset = MTDPART_OFS_APPEND,
}
.name = "m25p80",
.parts = bfin_spi_flash_partitions,
.nr_parts = ARRAY_SIZE(bfin_spi_flash_partitions),
- .type = "m25p64",
+ .type = "m25p16",
};
/* SPI flash chip (m25p64) */
};
#endif
+#ifdef CONFIG_I2C_BOARDINFO
+static struct i2c_board_info __initdata bfin_i2c_board_info[] = {
+#if defined(CONFIG_TWI_LCD) || defined(CONFIG_TWI_LCD_MODULE)
+ {
+ I2C_BOARD_INFO("pcf8574_lcd", 0x22),
+ .type = "pcf8574_lcd",
+ },
+#endif
+#if defined(CONFIG_TWI_KEYPAD) || defined(CONFIG_TWI_KEYPAD_MODULE)
+ {
+ I2C_BOARD_INFO("pcf8574_keypad", 0x27),
+ .type = "pcf8574_keypad",
+ .irq = IRQ_PF8,
+ },
+#endif
+};
+#endif
+
#if defined(CONFIG_SERIAL_BFIN_SPORT) || defined(CONFIG_SERIAL_BFIN_SPORT_MODULE)
static struct platform_device bfin_sport0_uart_device = {
.name = "bfin-sport-uart",
.resource = &bfin_gpios_resources,
};
+static const unsigned int cclk_vlev_datasheet[] =
+{
+ VRPAIR(VLEV_100, 400000000),
+ VRPAIR(VLEV_105, 426000000),
+ VRPAIR(VLEV_110, 500000000),
+ VRPAIR(VLEV_115, 533000000),
+ VRPAIR(VLEV_120, 600000000),
+};
+
+static struct bfin_dpmc_platform_data bfin_dmpc_vreg_data = {
+ .tuple_tab = cclk_vlev_datasheet,
+ .tabsize = ARRAY_SIZE(cclk_vlev_datasheet),
+ .vr_settling_time = 25 /* us */,
+};
+
+static struct platform_device bfin_dpmc = {
+ .name = "bfin dpmc",
+ .dev = {
+ .platform_data = &bfin_dmpc_vreg_data,
+ },
+};
+
static struct platform_device *stamp_devices[] __initdata = {
+
+ &bfin_dpmc,
+
#if defined(CONFIG_MTD_NAND_BF5XX) || defined(CONFIG_MTD_NAND_BF5XX_MODULE)
&bf5xx_nand_device,
#endif
&bfin_device_gpiokeys,
#endif
+#if defined(CONFIG_MTD_PHYSMAP) || defined(CONFIG_MTD_PHYSMAP_MODULE)
+ &ezkit_flash_device,
+#endif
+
&bfin_gpios_device,
};
static int __init stamp_init(void)
{
printk(KERN_INFO "%s(): registering device resources\n", __func__);
+
+#ifdef CONFIG_I2C_BOARDINFO
+ i2c_register_board_info(0, bfin_i2c_board_info,
+ ARRAY_SIZE(bfin_i2c_board_info));
+#endif
+
platform_add_devices(stamp_devices, ARRAY_SIZE(stamp_devices));
#if defined(CONFIG_SPI_BFIN) || defined(CONFIG_SPI_BFIN_MODULE)
spi_register_board_info(bfin_spi_board_info,
#include <linux/mtd/partitions.h>
#include <linux/spi/spi.h>
#include <linux/spi/flash.h>
+#if defined(CONFIG_USB_ISP1362_HCD) || defined(CONFIG_USB_ISP1362_HCD_MODULE)
#include <linux/usb/isp1362.h>
+#endif
#include <linux/ata_platform.h>
#include <linux/irq.h>
#include <asm/dma.h>
#include <asm/bfin5xx_spi.h>
#include <asm/portmux.h>
+#include <asm/dpmc.h>
/*
* Name the Board for the /proc/cpuinfo
};
#endif
+static const unsigned int cclk_vlev_datasheet[] =
+{
+ VRPAIR(VLEV_085, 250000000),
+ VRPAIR(VLEV_090, 376000000),
+ VRPAIR(VLEV_095, 426000000),
+ VRPAIR(VLEV_100, 426000000),
+ VRPAIR(VLEV_105, 476000000),
+ VRPAIR(VLEV_110, 476000000),
+ VRPAIR(VLEV_115, 476000000),
+ VRPAIR(VLEV_120, 600000000),
+ VRPAIR(VLEV_125, 600000000),
+ VRPAIR(VLEV_130, 600000000),
+};
+
+static struct bfin_dpmc_platform_data bfin_dmpc_vreg_data = {
+ .tuple_tab = cclk_vlev_datasheet,
+ .tabsize = ARRAY_SIZE(cclk_vlev_datasheet),
+ .vr_settling_time = 25 /* us */,
+};
+
+static struct platform_device bfin_dpmc = {
+ .name = "bfin dpmc",
+ .dev = {
+ .platform_data = &bfin_dmpc_vreg_data,
+ },
+};
+
static struct platform_device *cm_bf533_devices[] __initdata = {
+
+ &bfin_dpmc,
+
#if defined(CONFIG_SERIAL_BFIN) || defined(CONFIG_SERIAL_BFIN_MODULE)
&bfin_uart_device,
#endif
#include <asm/dma.h>
#include <asm/bfin5xx_spi.h>
#include <asm/portmux.h>
+#include <asm/dpmc.h>
/*
* Name the Board for the /proc/cpuinfo
};
#endif
+static const unsigned int cclk_vlev_datasheet[] =
+{
+ VRPAIR(VLEV_085, 250000000),
+ VRPAIR(VLEV_090, 376000000),
+ VRPAIR(VLEV_095, 426000000),
+ VRPAIR(VLEV_100, 426000000),
+ VRPAIR(VLEV_105, 476000000),
+ VRPAIR(VLEV_110, 476000000),
+ VRPAIR(VLEV_115, 476000000),
+ VRPAIR(VLEV_120, 600000000),
+ VRPAIR(VLEV_125, 600000000),
+ VRPAIR(VLEV_130, 600000000),
+};
+
+static struct bfin_dpmc_platform_data bfin_dmpc_vreg_data = {
+ .tuple_tab = cclk_vlev_datasheet,
+ .tabsize = ARRAY_SIZE(cclk_vlev_datasheet),
+ .vr_settling_time = 25 /* us */,
+};
+
+static struct platform_device bfin_dpmc = {
+ .name = "bfin dpmc",
+ .dev = {
+ .platform_data = &bfin_dmpc_vreg_data,
+ },
+};
+
static struct platform_device *ezkit_devices[] __initdata = {
+
+ &bfin_dpmc,
+
#if defined(CONFIG_SMC91X) || defined(CONFIG_SMC91X_MODULE)
&smc91x_device,
#endif
#include <asm/bfin5xx_spi.h>
#include <asm/reboot.h>
#include <asm/portmux.h>
+#include <asm/dpmc.h>
/*
* Name the Board for the /proc/cpuinfo
};
#endif
+static const unsigned int cclk_vlev_datasheet[] =
+{
+ VRPAIR(VLEV_085, 250000000),
+ VRPAIR(VLEV_090, 376000000),
+ VRPAIR(VLEV_095, 426000000),
+ VRPAIR(VLEV_100, 426000000),
+ VRPAIR(VLEV_105, 476000000),
+ VRPAIR(VLEV_110, 476000000),
+ VRPAIR(VLEV_115, 476000000),
+ VRPAIR(VLEV_120, 600000000),
+ VRPAIR(VLEV_125, 600000000),
+ VRPAIR(VLEV_130, 600000000),
+};
+
+static struct bfin_dpmc_platform_data bfin_dmpc_vreg_data = {
+ .tuple_tab = cclk_vlev_datasheet,
+ .tabsize = ARRAY_SIZE(cclk_vlev_datasheet),
+ .vr_settling_time = 25 /* us */,
+};
+
+static struct platform_device bfin_dpmc = {
+ .name = "bfin dpmc",
+ .dev = {
+ .platform_data = &bfin_dmpc_vreg_data,
+ },
+};
+
static struct platform_device *stamp_devices[] __initdata = {
+
+ &bfin_dpmc,
+
#if defined(CONFIG_RTC_DRV_BFIN) || defined(CONFIG_RTC_DRV_BFIN_MODULE)
&rtc_device,
#endif
#include <linux/mtd/partitions.h>
#include <linux/spi/spi.h>
#include <linux/spi/flash.h>
+#if defined(CONFIG_USB_ISP1362_HCD) || defined(CONFIG_USB_ISP1362_HCD_MODULE)
#include <linux/usb/isp1362.h>
+#endif
#include <linux/ata_platform.h>
#include <linux/irq.h>
#include <asm/dma.h>
#include <asm/bfin5xx_spi.h>
#include <asm/portmux.h>
+#include <asm/dpmc.h>
/*
* Name the Board for the /proc/cpuinfo
};
#endif
+static const unsigned int cclk_vlev_datasheet[] =
+{
+ VRPAIR(VLEV_085, 250000000),
+ VRPAIR(VLEV_090, 376000000),
+ VRPAIR(VLEV_095, 426000000),
+ VRPAIR(VLEV_100, 426000000),
+ VRPAIR(VLEV_105, 476000000),
+ VRPAIR(VLEV_110, 476000000),
+ VRPAIR(VLEV_115, 476000000),
+ VRPAIR(VLEV_120, 500000000),
+ VRPAIR(VLEV_125, 533000000),
+ VRPAIR(VLEV_130, 600000000),
+};
+
+static struct bfin_dpmc_platform_data bfin_dmpc_vreg_data = {
+ .tuple_tab = cclk_vlev_datasheet,
+ .tabsize = ARRAY_SIZE(cclk_vlev_datasheet),
+ .vr_settling_time = 25 /* us */,
+};
+
+static struct platform_device bfin_dpmc = {
+ .name = "bfin dpmc",
+ .dev = {
+ .platform_data = &bfin_dmpc_vreg_data,
+ },
+};
+
static struct platform_device *cm_bf537_devices[] __initdata = {
+
+ &bfin_dpmc,
+
#if defined(CONFIG_FB_HITACHI_TX09) || defined(CONFIG_FB_HITACHI_TX09_MODULE)
&hitachi_fb_device,
#endif
#include <asm/bfin5xx_spi.h>
#include <asm/reboot.h>
#include <asm/portmux.h>
+#include <asm/dpmc.h>
#include <linux/spi/ad7877.h>
/*
};
#endif
+static const unsigned int cclk_vlev_datasheet[] =
+{
+ VRPAIR(VLEV_085, 250000000),
+ VRPAIR(VLEV_090, 376000000),
+ VRPAIR(VLEV_095, 426000000),
+ VRPAIR(VLEV_100, 426000000),
+ VRPAIR(VLEV_105, 476000000),
+ VRPAIR(VLEV_110, 476000000),
+ VRPAIR(VLEV_115, 476000000),
+ VRPAIR(VLEV_120, 500000000),
+ VRPAIR(VLEV_125, 533000000),
+ VRPAIR(VLEV_130, 600000000),
+};
+
+static struct bfin_dpmc_platform_data bfin_dmpc_vreg_data = {
+ .tuple_tab = cclk_vlev_datasheet,
+ .tabsize = ARRAY_SIZE(cclk_vlev_datasheet),
+ .vr_settling_time = 25 /* us */,
+};
+
+static struct platform_device bfin_dpmc = {
+ .name = "bfin dpmc",
+ .dev = {
+ .platform_data = &bfin_dmpc_vreg_data,
+ },
+};
+
static struct platform_device *stamp_devices[] __initdata = {
+
+ &bfin_dpmc,
+
#if defined(CONFIG_BFIN_CFPCMCIA) || defined(CONFIG_BFIN_CFPCMCIA_MODULE)
&bfin_pcmcia_cf_device,
#endif
#include <linux/spi/flash.h>
#include <linux/irq.h>
#include <linux/interrupt.h>
+#if defined(CONFIG_USB_MUSB_HDRC) || defined(CONFIG_USB_MUSB_HDRC_MODULE)
#include <linux/usb/musb.h>
+#endif
#include <asm/bfin5xx_spi.h>
#include <asm/cplb.h>
#include <asm/dma.h>
#include <asm/nand.h>
#include <asm/portmux.h>
#include <asm/mach/bf54x_keys.h>
+#include <asm/dpmc.h>
#include <linux/input.h>
#include <linux/spi/ad7877.h>
};
#endif
+static const unsigned int cclk_vlev_datasheet[] =
+{
+/*
+ * Internal VLEV BF54XSBBC1533
+ * Temporarily using these values until the data sheet is updated.
+ */
+ VRPAIR(VLEV_085, 150000000),
+ VRPAIR(VLEV_090, 250000000),
+ VRPAIR(VLEV_110, 276000000),
+ VRPAIR(VLEV_115, 301000000),
+ VRPAIR(VLEV_120, 525000000),
+ VRPAIR(VLEV_125, 550000000),
+ VRPAIR(VLEV_130, 600000000),
+};
+
+static struct bfin_dpmc_platform_data bfin_dmpc_vreg_data = {
+ .tuple_tab = cclk_vlev_datasheet,
+ .tabsize = ARRAY_SIZE(cclk_vlev_datasheet),
+ .vr_settling_time = 25 /* us */,
+};
+
+static struct platform_device bfin_dpmc = {
+ .name = "bfin dpmc",
+ .dev = {
+ .platform_data = &bfin_dmpc_vreg_data,
+ },
+};
+
static struct platform_device *cm_bf548_devices[] __initdata = {
+
+ &bfin_dpmc,
+
#if defined(CONFIG_RTC_DRV_BFIN) || defined(CONFIG_RTC_DRV_BFIN_MODULE)
&rtc_device,
#endif
#include <asm/dma.h>
#include <asm/gpio.h>
#include <asm/nand.h>
+#include <asm/dpmc.h>
#include <asm/portmux.h>
#include <asm/mach/bf54x_keys.h>
#include <linux/input.h>
.resource = &bfin_gpios_resources,
};
+static const unsigned int cclk_vlev_datasheet[] =
+{
+/*
+ * Internal VLEV BF54XSBBC1533
+ * Temporarily using these values until the data sheet is updated.
+ */
+ VRPAIR(VLEV_085, 150000000),
+ VRPAIR(VLEV_090, 250000000),
+ VRPAIR(VLEV_110, 276000000),
+ VRPAIR(VLEV_115, 301000000),
+ VRPAIR(VLEV_120, 525000000),
+ VRPAIR(VLEV_125, 550000000),
+ VRPAIR(VLEV_130, 600000000),
+};
+
+static struct bfin_dpmc_platform_data bfin_dmpc_vreg_data = {
+ .tuple_tab = cclk_vlev_datasheet,
+ .tabsize = ARRAY_SIZE(cclk_vlev_datasheet),
+ .vr_settling_time = 25 /* us */,
+};
+
+static struct platform_device bfin_dpmc = {
+ .name = "bfin dpmc",
+ .dev = {
+ .platform_data = &bfin_dmpc_vreg_data,
+ },
+};
+
static struct platform_device *ezkit_devices[] __initdata = {
+
+ &bfin_dpmc,
+
#if defined(CONFIG_RTC_DRV_BFIN) || defined(CONFIG_RTC_DRV_BFIN_MODULE)
&rtc_device,
#endif
#include <linux/mtd/partitions.h>
#include <linux/spi/spi.h>
#include <linux/spi/flash.h>
+#if defined(CONFIG_USB_ISP1362_HCD) || defined(CONFIG_USB_ISP1362_HCD_MODULE)
#include <linux/usb/isp1362.h>
+#endif
#include <linux/ata_platform.h>
#include <linux/irq.h>
#include <asm/dma.h>
#include <asm/bfin5xx_spi.h>
#include <asm/portmux.h>
+#include <asm/dpmc.h>
/*
* Name the Board for the /proc/cpuinfo
};
#endif
+static const unsigned int cclk_vlev_datasheet[] =
+{
+ VRPAIR(VLEV_085, 250000000),
+ VRPAIR(VLEV_090, 300000000),
+ VRPAIR(VLEV_095, 313000000),
+ VRPAIR(VLEV_100, 350000000),
+ VRPAIR(VLEV_105, 400000000),
+ VRPAIR(VLEV_110, 444000000),
+ VRPAIR(VLEV_115, 450000000),
+ VRPAIR(VLEV_120, 475000000),
+ VRPAIR(VLEV_125, 500000000),
+ VRPAIR(VLEV_130, 600000000),
+};
+
+static struct bfin_dpmc_platform_data bfin_dmpc_vreg_data = {
+ .tuple_tab = cclk_vlev_datasheet,
+ .tabsize = ARRAY_SIZE(cclk_vlev_datasheet),
+ .vr_settling_time = 25 /* us */,
+};
+
+static struct platform_device bfin_dpmc = {
+ .name = "bfin dpmc",
+ .dev = {
+ .platform_data = &bfin_dmpc_vreg_data,
+ },
+};
+
static struct platform_device *cm_bf561_devices[] __initdata = {
+ &bfin_dpmc,
+
#if defined(CONFIG_FB_HITACHI_TX09) || defined(CONFIG_FB_HITACHI_TX09_MODULE)
&hitachi_fb_device,
#endif
#include <asm/dma.h>
#include <asm/bfin5xx_spi.h>
#include <asm/portmux.h>
+#include <asm/dpmc.h>
/*
* Name the Board for the /proc/cpuinfo
};
#endif
+static const unsigned int cclk_vlev_datasheet[] =
+{
+ VRPAIR(VLEV_085, 250000000),
+ VRPAIR(VLEV_090, 300000000),
+ VRPAIR(VLEV_095, 313000000),
+ VRPAIR(VLEV_100, 350000000),
+ VRPAIR(VLEV_105, 400000000),
+ VRPAIR(VLEV_110, 444000000),
+ VRPAIR(VLEV_115, 450000000),
+ VRPAIR(VLEV_120, 475000000),
+ VRPAIR(VLEV_125, 500000000),
+ VRPAIR(VLEV_130, 600000000),
+};
+
+static struct bfin_dpmc_platform_data bfin_dmpc_vreg_data = {
+ .tuple_tab = cclk_vlev_datasheet,
+ .tabsize = ARRAY_SIZE(cclk_vlev_datasheet),
+ .vr_settling_time = 25 /* us */,
+};
+
+static struct platform_device bfin_dpmc = {
+ .name = "bfin dpmc",
+ .dev = {
+ .platform_data = &bfin_dmpc_vreg_data,
+ },
+};
+
static struct platform_device *ezkit_devices[] __initdata = {
+
+ &bfin_dpmc,
+
#if defined(CONFIG_SMC91X) || defined(CONFIG_SMC91X_MODULE)
&smc91x_device,
#endif
cache.o cacheinit.o entry.o \
interrupt.o lock.o irqpanic.o arch_checks.o ints-priority.o
-obj-$(CONFIG_PM) += pm.o dpmc.o
-obj-$(CONFIG_CPU_FREQ) += cpufreq.o
+obj-$(CONFIG_PM) += pm.o dpmc_modes.o
+obj-$(CONFIG_CPU_FREQ) += cpufreq.o
+obj-$(CONFIG_CPU_VOLTAGE) += dpmc.o
unsigned int tscale; /* change the divider on the core timer interrupt */
} dpm_state_table[3];
+/*
+ * Offset for CYCLES, normalized to the maximum frequency; used by the
+ * time-ts cycles clock source, but could be used elsewhere as well.
+ */
+unsigned long long __bfin_cycles_off;
+unsigned int __bfin_cycles_mod;
+
/**************************************************************************/
static unsigned int bfin_getfreq(unsigned int cpu)
unsigned int index, plldiv, tscale;
unsigned long flags, cclk_hz;
struct cpufreq_freqs freqs;
+ cycles_t cycles;
if (cpufreq_frequency_table_target(policy, bfin_freq_table,
target_freq, relation, &index))
bfin_write_PLL_DIV(plldiv);
/* we have to adjust the core timer, because it is using cclk */
bfin_write_TSCALE(tscale);
+ cycles = get_cycles();
SSYNC();
+ cycles += 10; /* ~10 cycles we lose after get_cycles() */
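+ /*
+ * Fold the difference between the old and the new scaling of the
+ * current cycle count into the offset, so the normalized CYCLES
+ * value stays continuous across the divider change.
+ */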
+ __bfin_cycles_off += (cycles << __bfin_cycles_mod) - (cycles << index);
+ __bfin_cycles_mod = index;
local_irq_restore(flags);
+ /* TODO: just a test case for the cycles clock source, remove later */
+ pr_debug("cpufreq: done\n");
cpufreq_notify_transition(&freqs, CPUFREQ_POSTCHANGE);
return 0;
unsigned long cclk, sclk, csel, min_cclk;
int index;
-#ifdef CONFIG_CYCLES_CLOCKSOURCE
-/*
- * Clocksource CYCLES is still CONTINUOUS but not longer MONOTONIC in case we enable
- * CPU frequency scaling, since CYCLES runs off Core Clock.
- */
- printk(KERN_WARNING "CPU frequency scaling not supported: Clocksource not suitable\n"
- return -ENODEV;
-#endif
-
if (policy->cpu != 0)
return -EINVAL;
cclk = get_cclk();
sclk = get_sclk();
-#if ANOMALY_05000273
+#if ANOMALY_05000273 || (!defined(CONFIG_BF54x) && defined(CONFIG_BFIN_DCACHE))
min_cclk = sclk * 2;
#else
min_cclk = sclk;
--- /dev/null
+/*
+ * Copyright 2008 Analog Devices Inc.
+ *
+ * Licensed under the GPL-2 or later.
+ */
+
+#include <linux/cdev.h>
+#include <linux/device.h>
+#include <linux/errno.h>
+#include <linux/fs.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/platform_device.h>
+#include <linux/types.h>
+#include <linux/cpufreq.h>
+
+#include <asm/delay.h>
+#include <asm/dpmc.h>
+
+#define DRIVER_NAME "bfin dpmc"
+
+#define dprintk(msg...) \
+ cpufreq_debug_printk(CPUFREQ_DEBUG_DRIVER, DRIVER_NAME, msg)
+
+struct bfin_dpmc_platform_data *pdata;
+
+/**
+ * bfin_set_vlev - Update the VLEV field in the VR_CTL register,
+ * avoiding the BYPASS sequence.
+ */
+static void bfin_set_vlev(unsigned int vlev)
+{
+ unsigned pll_lcnt;
+
+ pll_lcnt = bfin_read_PLL_LOCKCNT();
+
+ bfin_write_PLL_LOCKCNT(1);
+ bfin_write_VR_CTL((bfin_read_VR_CTL() & ~VLEV) | vlev);
+ bfin_write_PLL_LOCKCNT(pll_lcnt);
+}
+
+/**
+ * bfin_get_vlev - Get CPU specific VLEV from platform device data
+ */
+static unsigned int bfin_get_vlev(unsigned int freq)
+{
+ int i;
+
+ if (!pdata)
+ goto err_out;
+
+ freq >>= 16;
+
+ for (i = 0; i < pdata->tabsize; i++)
+ if (freq <= (pdata->tuple_tab[i] & 0xFFFF))
+ return pdata->tuple_tab[i] >> 16;
+
+err_out:
+ printk(KERN_WARNING "DPMC: No suitable CCLK VDDINT voltage pair found\n");
+ return VLEV_120;
+}
+
+#ifdef CONFIG_CPU_FREQ
+static int
+vreg_cpufreq_notifier(struct notifier_block *nb, unsigned long val, void *data)
+{
+ struct cpufreq_freqs *freq = data;
+
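+ /*
+ * Raise VDDINT before the core clock goes up (PRECHANGE) and lower
+ * it only after the clock has come down (POSTCHANGE), so the core
+ * is never clocked faster than the current voltage allows.
+ */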
+ if (val == CPUFREQ_PRECHANGE && freq->old < freq->new) {
+ bfin_set_vlev(bfin_get_vlev(freq->new));
+ udelay(pdata->vr_settling_time); /* Wait until the voltage has settled */
+
+ } else if (val == CPUFREQ_POSTCHANGE && freq->old > freq->new)
+ bfin_set_vlev(bfin_get_vlev(freq->new));
+
+ return 0;
+}
+
+static struct notifier_block vreg_cpufreq_notifier_block = {
+ .notifier_call = vreg_cpufreq_notifier
+};
+#endif /* CONFIG_CPU_FREQ */
+
+/**
+ * bfin_dpmc_probe - register the cpufreq transition notifier
+ *
+ */
+static int __devinit bfin_dpmc_probe(struct platform_device *pdev)
+{
+ if (pdev->dev.platform_data)
+ pdata = pdev->dev.platform_data;
+ else
+ return -EINVAL;
+
+ return cpufreq_register_notifier(&vreg_cpufreq_notifier_block,
+ CPUFREQ_TRANSITION_NOTIFIER);
+}
+
+/**
+ * bfin_dpmc_remove - unregister the cpufreq transition notifier
+ */
+static int __devexit bfin_dpmc_remove(struct platform_device *pdev)
+{
+ pdata = NULL;
+ return cpufreq_unregister_notifier(&vreg_cpufreq_notifier_block,
+ CPUFREQ_TRANSITION_NOTIFIER);
+}
+
+struct platform_driver bfin_dpmc_device_driver = {
+ .probe = bfin_dpmc_probe,
+ .remove = __devexit_p(bfin_dpmc_remove),
+ .driver = {
+ .name = DRIVER_NAME,
+ }
+};
+
+/**
+ * bfin_dpmc_init - Init driver
+ */
+static int __init bfin_dpmc_init(void)
+{
+ return platform_driver_register(&bfin_dpmc_device_driver);
+}
+module_init(bfin_dpmc_init);
+
+/**
+ * bfin_dpmc_exit - tear down the driver
+ */
+static void __exit bfin_dpmc_exit(void)
+{
+ platform_driver_unregister(&bfin_dpmc_device_driver);
+}
+module_exit(bfin_dpmc_exit);
+
+MODULE_AUTHOR("Michael Hennerich <hennerich@blackfin.uclinux.org>");
+MODULE_DESCRIPTION("cpu power management driver for Blackfin");
+MODULE_LICENSE("GPL");
/*
- * File: arch/blackfin/mach-common/dpmc.S
- * Based on:
- * Author: LG Soft India
+ * Copyright 2004-2008 Analog Devices Inc.
*
- * Created: ?
- * Description: Watchdog Timer APIs
- *
- * Modified:
- * Copyright 2004-2006 Analog Devices Inc.
- *
- * Bugs: Enter bugs at http://blackfin.uclinux.org/
- *
- * This program is free software; you can redistribute it and/or modify
- * it under the terms of the GNU General Public License as published by
- * the Free Software Foundation; either version 2 of the License, or
- * (at your option) any later version.
- *
- * This program is distributed in the hope that it will be useful,
- * but WITHOUT ANY WARRANTY; without even the implied warranty of
- * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
- * GNU General Public License for more details.
- *
- * You should have received a copy of the GNU General Public License
- * along with this program; if not, see the file COPYING, or write
- * to the Free Software Foundation, Inc.,
- * 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+ * Licensed under the GPL-2 or later.
*/
#include <linux/linkage.h>
ENDPROC(_ex_soft_bp)
ENTRY(_ex_single_step)
+ /* If we just returned from an interrupt, the single step event is
+ for the RTI instruction. */
r7 = retx;
r6 = reti;
cc = r7 == r6;
- if cc jump _bfin_return_from_exception
- r7 = syscfg;
- bitclr (r7, 0);
- syscfg = R7;
+ if cc jump _bfin_return_from_exception;
+ /* If we were in user mode, do the single step normally. */
p5.l = lo(IPEND);
p5.h = hi(IPEND);
r6 = [p5];
- cc = bittst(r6, 5);
- if !cc jump _ex_trap_c;
- p4.l = lo(EVT5);
- p4.h = hi(EVT5);
- r6.h = _exception_to_level5;
- r6.l = _exception_to_level5;
- r7 = [p4];
- cc = r6 == r7;
- if !cc jump _ex_trap_c;
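+ /* Any of IPEND[15:5] set means we interrupted a hardware interrupt
+ * handler. */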
+ r7 = 0xffe0 (z);
+ r7 = r7 & r6;
+ cc = r7 == 0;
+ if !cc jump 1f;
+
+ /* Single stepping only a single instruction, so clear the trace
+ * bit here. */
+ r7 = syscfg;
+ bitclr (r7, 0);
+ syscfg = R7;
+ jump _ex_trap_c;
+
+1:
+ /*
+ * We were in an interrupt handler. By convention, all of them save
+ * SYSCFG with their first instruction, so by checking whether our
+ * RETX points at the entry point, we can determine whether to allow
+ * a single step, or whether to clear SYSCFG.
+ *
+ * First, find out the interrupt level and the event vector for it.
+ */
+ p5.l = lo(EVT0);
+ p5.h = hi(EVT0);
+ p5 += -4;
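+ /* Rotate IPEND until the active level's bit drops into CC; p5 then
+ * points at that level's event vector. */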
+2:
+ r7 = rot r7 by -1;
+ p5 += 4;
+ if !cc jump 2b;
+
+ /* What we actually do is test for the _second_ instruction in the
+ * IRQ handler. That way, if there are insns following the restore
+ * of SYSCFG after leaving the handler, we will not turn off SYSCFG
+ * for them. */
+
+ r7 = [p5];
+ r7 += 2;
+ r6 = RETX;
+ cc = R7 == R6;
+ if !cc jump _bfin_return_from_exception;
+
+ r7 = syscfg;
+ bitclr (r7, 0);
+ syscfg = R7;
+
+ /* Fall through to _bfin_return_from_exception. */
ENDPROC(_ex_single_step)
ENTRY(_bfin_return_from_exception)
p5.l = _saved_icplb_fault_addr;
[p5] = r7;
- p4.l = __retx;
- p4.h = __retx;
+ p4.l = _excpt_saved_stuff;
+ p4.h = _excpt_saved_stuff;
+
r6 = retx;
[p4] = r6;
- p4.l = lo(SAFE_USER_INSTRUCTION);
- p4.h = hi(SAFE_USER_INSTRUCTION);
- retx = p4;
+
+ r6 = SYSCFG;
+ [p4 + 4] = r6;
+ BITCLR(r6, 0);
+ SYSCFG = r6;
/* Disable all interrupts, but make sure level 5 is enabled so
* we can switch to that level. Save the old mask. */
cli r6;
- p4.l = _excpt_saved_imask;
- p4.h = _excpt_saved_imask;
- [p4] = r6;
+ [p4 + 8] = r6;
+
+ p4.l = lo(SAFE_USER_INSTRUCTION);
+ p4.h = hi(SAFE_USER_INSTRUCTION);
+ retx = p4;
+
r6 = 0x3f;
sti r6;
*/
SAVE_ALL_SYS
+ /* The dumping functions expect the return address in the RETI
+ * slot. */
+ r6 = retx;
+ [sp + PT_PC] = r6;
+
r0 = sp; /* stack frame pt_regs pointer argument ==> r0 */
SP += -12;
call _double_fault_c;
ENTRY(_exception_to_level5)
SAVE_ALL_SYS
- p4.l = __retx;
- p4.h = __retx;
+ p4.l = _excpt_saved_stuff;
+ p4.h = _excpt_saved_stuff;
r6 = [p4];
[sp + PT_PC] = r6;
+ r6 = [p4 + 4];
+ [sp + PT_SYSCFG] = r6;
+
/* Restore interrupt mask. We haven't pushed RETI, so this
* doesn't enable interrupts until we return from this handler. */
- p4.l = _excpt_saved_imask;
- p4.h = _excpt_saved_imask;
- r6 = [p4];
+ r6 = [p4 + 8];
sti r6;
/* Restore the hardware error vector. */
.rept NR_syscalls-(.-_sys_call_table)/4
.long _sys_ni_syscall
.endr
-_excpt_saved_imask:
+
+ /*
+ * Used to save the real RETX, IMASK and SYSCFG when temporarily
+ * storing safe values across the transition from exception to IRQ5.
+ */
+_excpt_saved_stuff:
+ .long 0;
+ .long 0;
.long 0;
_exception_stack:
_last_cplb_fault_retx:
.long 0;
#endif
- /* Used to save the real RETX when temporarily storing a safe
- * return address. */
-__retx:
- .long 0;
#include <asm/uaccess.h>
#include <asm/segment.h>
-/*
- * sys_pipe() is the normal C calling standard for creating
- * a pipe. It's not the way Unix traditionally does this, though.
- */
-asmlinkage int sys_pipe(unsigned long __user * fildes)
-{
- int fd[2];
- int error;
-
- lock_kernel();
- error = do_pipe(fd);
- unlock_kernel();
- if (!error) {
- if (copy_to_user(fildes, fd, 2*sizeof(int)))
- error = -EFAULT;
- }
- return error;
-}
-
/* common code for old and new mmaps */
static inline long
do_mmap2(unsigned long addr, unsigned long len, unsigned long prot,
return oldval;
}
-/*
- * sys_pipe() is the normal C calling standard for creating
- * a pipe. It's not the way Unix traditionally does this, though.
- */
-asmlinkage int
-sys_pipe(unsigned long r0, unsigned long r1, unsigned long r2,
- unsigned long r3, unsigned long r4, unsigned long r5,
- unsigned long r6, struct pt_regs regs)
-{
- int fd[2];
- int error;
-
- error = do_pipe(fd);
- if (!error) {
- if (copy_to_user((void __user *)r0, fd, 2*sizeof(int)))
- error = -EFAULT;
- }
- return error;
-}
-
asmlinkage long sys_mmap2(unsigned long addr, unsigned long len,
unsigned long prot, unsigned long flags,
unsigned long fd, unsigned long pgoff)
Say Y here if you are building a kernel for a desktop, embedded
or real-time system. Say N if you are unsure.
-config PREEMPT_BKL
- bool "Preempt The Big Kernel Lock"
- depends on PREEMPT
- default y
- help
- This option reduces the latency of the kernel by making the
- big kernel lock preemptible.
-
- Say Y here if you are building a kernel for a desktop system.
- Say N if you are unsure.
-
config MN10300_CURRENT_IN_E2
bool "Hold current task address in E2 register"
default y
/* Outbound ranges, one memory and one IO,
* later cannot be changed. Chip supports a second
* IO range but we don't use it for now
+ * From the 440EPx user manual:
+ * PCI 1 Memory 1 8000 0000 1 BFFF FFFF 1GB
+ * I/O 1 E800 0000 1 E800 FFFF 64KB
+ * I/O 1 E880 0000 1 EBFF FFFF 56MB
*/
- ranges = <02000000 0 80000000 1 80000000 0 10000000
- 01000000 0 00000000 1 e8000000 0 00100000>;
+ ranges = <02000000 0 80000000 1 80000000 0 40000000
+ 01000000 0 00000000 1 e8000000 0 00010000
+ 01000000 0 00000000 1 e8800000 0 03800000>;
/* Inbound 2GB range starting at 0 */
dma-ranges = <42000000 0 0 0 0 0 80000000>;
#include <asm/mmu.h>
#include <asm/pgtable.h>
#include <asm/io.h>
-#include <asm/prom.h>
#include <asm/processor.h>
#include <asm/udbg.h>
.machine_check = machine_check_4xx,
.platform = "ppc405",
},
+ { /* default match */
+ .pvr_mask = 0x00000000,
+ .pvr_value = 0x00000000,
+ .cpu_name = "(generic 40x PPC)",
+ .cpu_features = CPU_FTRS_40X,
+ .cpu_user_features = PPC_FEATURE_32 |
+ PPC_FEATURE_HAS_MMU | PPC_FEATURE_HAS_4xxMAC,
+ .icache_bsize = 32,
+ .dcache_bsize = 32,
+ .machine_check = machine_check_4xx,
+ .platform = "ppc405",
+ }
#endif /* CONFIG_40x */
#ifdef CONFIG_44x
.machine_check = machine_check_440A,
.platform = "ppc440",
},
+ { /* default match */
+ .pvr_mask = 0x00000000,
+ .pvr_value = 0x00000000,
+ .cpu_name = "(generic 44x PPC)",
+ .cpu_features = CPU_FTRS_44X,
+ .cpu_user_features = COMMON_USER_BOOKE,
+ .icache_bsize = 32,
+ .dcache_bsize = 32,
+ .machine_check = machine_check_4xx,
+ .platform = "ppc440",
+ }
#endif /* CONFIG_44x */
-#ifdef CONFIG_FSL_BOOKE
#ifdef CONFIG_E200
{ /* e200z5 */
.pvr_mask = 0xfff00000,
.machine_check = machine_check_e200,
.platform = "ppc5554",
},
-#elif defined(CONFIG_E500)
+ { /* default match */
+ .pvr_mask = 0x00000000,
+ .pvr_value = 0x00000000,
+ .cpu_name = "(generic E200 PPC)",
+ .cpu_features = CPU_FTRS_E200,
+ .cpu_user_features = COMMON_USER_BOOKE |
+ PPC_FEATURE_HAS_EFP_SINGLE |
+ PPC_FEATURE_UNIFIED_CACHE,
+ .dcache_bsize = 32,
+ .machine_check = machine_check_e200,
+ .platform = "ppc5554",
+ },
+#endif /* CONFIG_E200 */
+#ifdef CONFIG_E500
{ /* e500 */
.pvr_mask = 0xffff0000,
.pvr_value = 0x80200000,
.machine_check = machine_check_e500,
.platform = "ppc8548",
},
-#endif
-#endif
-#if !CLASSIC_PPC
{ /* default match */
.pvr_mask = 0x00000000,
.pvr_value = 0x00000000,
- .cpu_name = "(generic PPC)",
- .cpu_features = CPU_FTRS_GENERIC_32,
- .cpu_user_features = PPC_FEATURE_32,
+ .cpu_name = "(generic E500 PPC)",
+ .cpu_features = CPU_FTRS_E500,
+ .cpu_user_features = COMMON_USER_BOOKE |
+ PPC_FEATURE_HAS_SPE_COMP |
+ PPC_FEATURE_HAS_EFP_SINGLE_COMP,
.icache_bsize = 32,
.dcache_bsize = 32,
+ .machine_check = machine_check_e500,
.platform = "powerpc",
- }
-#endif /* !CLASSIC_PPC */
+ }
+#endif /* CONFIG_E500 */
#endif /* CONFIG_PPC32 */
};
rlwimi r10, r11, 0, 26, 26 /* UX = HWEXEC & USER */
rlwimi r12, r10, 0, 26, 31 /* Insert static perms */
- rlwinm r12, r12, 0, 20, 15 /* Clear U0-U3 */
+
+ /*
+ * Clear U0-U3 and WL1 IL1I IL1D IL2I IL2D bits which are added
+ * on newer 440 cores like the 440x6 used on AMCC 460EX/460GT (see
+ * include/asm-powerpc/pgtable-ppc32.h for details).
+ */
+ rlwinm r12, r12, 0, 20, 10
+
tlbwe r12, r13, PPC44x_TLB_ATTRIB /* Write ATTRIB */
/* Done...restore registers and get out of here.
addi r2,r2,0x4000
add r2,r2,r26
- /* Set initial ptr to current */
- LOAD_REG_IMMEDIATE(r4, init_task)
- std r4,PACACURRENT(r13)
-
/* Do very early kernel initializations, including initial hash table,
* stab and slb setup before we turn on relocation. */
if (size > 0x10000)
size = 0x10000;
- printk(KERN_ERR "no ISA IO ranges or unexpected isa range, "
- "mapping 64k\n");
-
__ioremap_at(phb_io_base_phys, (void *)ISA_IO_BASE,
size, _PAGE_NO_CACHE|_PAGE_GUARDED);
return;
void __init early_setup(unsigned long dt_ptr)
{
+ /* -------- printk is _NOT_ safe to use here ! ------- */
+
/* Fill in any unititialised pacas */
initialise_pacas();
/* Assume we're on cpu 0 for now. Don't write to the paca yet! */
setup_paca(0);
- /* Enable early debugging if any specified (see udbg.h) */
- udbg_early_init();
-
/* Initialize lockdep early or else spinlocks will blow */
lockdep_init();
+ /* -------- printk is now safe to use ------- */
+
+ /* Enable early debugging if any specified (see udbg.h) */
+ udbg_early_init();
+
DBG(" -> early_setup(), dt_ptr: 0x%lx\n", dt_ptr);
/*
#include <linux/percpu.h>
#include <linux/types.h>
#include <linux/ioport.h>
+#include <linux/kernel_stat.h>
#include <asm/io.h>
#include <asm/pgtable.h>
"IBM,CBEA-Internal-Interrupt-Controller");
}
+extern int noirqdebug;
+
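+/*
+ * Flow handler for Cell IIC interrupts: a variant of handle_fasteoi_irq
+ * that keeps re-running the action while the interrupt is re-marked
+ * pending, and that always sends the EOI, even if the interrupt is
+ * disabled or has no action attached.
+ */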
+static void handle_iic_irq(unsigned int irq, struct irq_desc *desc)
+{
+ const unsigned int cpu = smp_processor_id();
+
+ spin_lock(&desc->lock);
+
+ desc->status &= ~(IRQ_REPLAY | IRQ_WAITING);
+
+ /*
+ * If we're currently running this IRQ, or it's disabled,
+ * we shouldn't process the IRQ. Mark it pending, handle
+ * the necessary masking and bail out.
+ */
+ if (unlikely((desc->status & (IRQ_INPROGRESS | IRQ_DISABLED)) ||
+ !desc->action)) {
+ desc->status |= IRQ_PENDING;
+ goto out_eoi;
+ }
+
+ kstat_cpu(cpu).irqs[irq]++;
+
+ /* Mark the IRQ currently in progress.*/
+ desc->status |= IRQ_INPROGRESS;
+
+ do {
+ struct irqaction *action = desc->action;
+ irqreturn_t action_ret;
+
+ if (unlikely(!action))
+ goto out_eoi;
+
+ desc->status &= ~IRQ_PENDING;
+ spin_unlock(&desc->lock);
+ action_ret = handle_IRQ_event(irq, action);
+ if (!noirqdebug)
+ note_interrupt(irq, desc, action_ret);
+ spin_lock(&desc->lock);
+
+ } while ((desc->status & (IRQ_PENDING | IRQ_DISABLED)) == IRQ_PENDING);
+
+ desc->status &= ~IRQ_INPROGRESS;
+out_eoi:
+ desc->chip->eoi(irq);
+ spin_unlock(&desc->lock);
+}
+
static int iic_host_map(struct irq_host *h, unsigned int virq,
irq_hw_number_t hw)
{
break;
case IIC_IRQ_TYPE_IOEXC:
set_irq_chip_and_handler(virq, &iic_ioexc_chip,
- handle_fasteoi_irq);
+ handle_iic_irq);
break;
default:
- set_irq_chip_and_handler(virq, &iic_chip, handle_fasteoi_irq);
+ set_irq_chip_and_handler(virq, &iic_chip, handle_iic_irq);
}
return 0;
}
if (!test_bit(SPU_CONTEXT_SWITCH_PENDING, &spu->flags))
out_be64(&priv2->mfc_control_RW, MFC_CNTL_RESTART_DMA_COMMAND);
+ else {
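+ /*
+ * A context switch is pending, so don't restart the DMA here;
+ * record the fault and let the context switch code add the
+ * restart command to the saved MFC state.
+ */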
+ set_bit(SPU_CONTEXT_FAULT_PENDING, &spu->flags);
+ mb();
+ }
}
static inline void spu_load_slb(struct spu *spu, int slbe, struct spu_slb *slb)
return 0;
}
- spu->class_0_pending = 0;
- spu->dar = ea;
- spu->dsisr = dsisr;
+ spu->class_1_dar = ea;
+ spu->class_1_dsisr = dsisr;
+
+ spu->stop_callback(spu, 1);
- spu->stop_callback(spu);
+ spu->class_1_dar = 0;
+ spu->class_1_dsisr = 0;
return 0;
}
stat = spu_int_stat_get(spu, 0) & mask;
spu->class_0_pending |= stat;
- spu->dsisr = spu_mfc_dsisr_get(spu);
- spu->dar = spu_mfc_dar_get(spu);
+ spu->class_0_dsisr = spu_mfc_dsisr_get(spu);
+ spu->class_0_dar = spu_mfc_dar_get(spu);
spin_unlock(&spu->register_lock);
- spu->stop_callback(spu);
+ spu->stop_callback(spu, 0);
+
+ spu->class_0_pending = 0;
+ spu->class_0_dsisr = 0;
+ spu->class_0_dar = 0;
spu_int_stat_clear(spu, 0, stat);
if (stat & CLASS1_LS_COMPARE_SUSPEND_ON_PUT_INTR)
;
+ spu->class_1_dsisr = 0;
+ spu->class_1_dar = 0;
+
return stat ? IRQ_HANDLED : IRQ_NONE;
}
spu->ibox_callback(spu);
if (stat & CLASS2_SPU_STOP_INTR)
- spu->stop_callback(spu);
+ spu->stop_callback(spu, 2);
if (stat & CLASS2_SPU_HALT_INTR)
- spu->stop_callback(spu);
+ spu->stop_callback(spu, 2);
if (stat & CLASS2_SPU_DMA_TAG_GROUP_COMPLETE_INTR)
spu->mfc_callback(spu);
#include <linux/io.h>
#include <linux/mutex.h>
#include <linux/device.h>
+#include <linux/sched.h>
#include <asm/spu.h>
#include <asm/spu_priv1.h>
static void cpu_affinity_set(struct spu *spu, int cpu)
{
- u64 target = iic_get_target_id(cpu);
- u64 route = target << 48 | target << 32 | target << 16;
+ u64 target;
+ u64 route;
+
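+ /*
+ * Only route SPU interrupts to a CPU that is on the same node
+ * as the SPU; otherwise leave the current routing untouched.
+ */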
+ if (nr_cpus_node(spu->node)) {
+ cpumask_t spumask = node_to_cpumask(spu->node);
+ cpumask_t cpumask = node_to_cpumask(cpu_to_node(cpu));
+
+ if (!cpus_intersects(spumask, cpumask))
+ return;
+ }
+
+ target = iic_get_target_id(cpu);
+ route = target << 48 | target << 32 | target << 16;
out_be64(&spu->priv1->int_route_RW, route);
}
return 0;
if (stat & CLASS0_DMA_ALIGNMENT_INTR)
- spufs_handle_event(ctx, ctx->csa.dar, SPE_EVENT_DMA_ALIGNMENT);
+ spufs_handle_event(ctx, ctx->csa.class_0_dar,
+ SPE_EVENT_DMA_ALIGNMENT);
if (stat & CLASS0_INVALID_DMA_COMMAND_INTR)
- spufs_handle_event(ctx, ctx->csa.dar, SPE_EVENT_INVALID_DMA);
+ spufs_handle_event(ctx, ctx->csa.class_0_dar,
+ SPE_EVENT_INVALID_DMA);
if (stat & CLASS0_SPU_ERROR_INTR)
- spufs_handle_event(ctx, ctx->csa.dar, SPE_EVENT_SPE_ERROR);
+ spufs_handle_event(ctx, ctx->csa.class_0_dar,
+ SPE_EVENT_SPE_ERROR);
+
+ ctx->csa.class_0_pending = 0;
return -EIO;
}
* in time, we can still expect to get the same fault
* the immediately after the context restore.
*/
- ea = ctx->csa.dar;
- dsisr = ctx->csa.dsisr;
+ ea = ctx->csa.class_1_dar;
+ dsisr = ctx->csa.class_1_dsisr;
if (!(dsisr & (MFC_DSISR_PTE_NOT_FOUND | MFC_DSISR_ACCESS_DENIED)))
return 0;
* time slicing will not preempt the context while the page fault
* handler is running. Context switch code removes mappings.
*/
- ctx->csa.dar = ctx->csa.dsisr = 0;
+ ctx->csa.class_1_dar = ctx->csa.class_1_dsisr = 0;
/*
* If we handled the fault successfully and are in runnable
#include <linux/file.h>
#include <linux/fs.h>
+#include <linux/fsnotify.h>
#include <linux/backing-dev.h>
#include <linux/init.h>
#include <linux/ioctl.h>
parent = dir->d_parent->d_inode;
ctx = SPUFS_I(dir->d_inode)->i_ctx;
- mutex_lock(&parent->i_mutex);
+ mutex_lock_nested(&parent->i_mutex, I_MUTEX_PARENT);
ret = spufs_rmdir(parent, dir);
mutex_unlock(&parent->i_mutex);
WARN_ON(ret);
mode &= ~current->fs->umask;
if (flags & SPU_CREATE_GANG)
- return spufs_create_gang(nd->path.dentry->d_inode,
+ ret = spufs_create_gang(nd->path.dentry->d_inode,
dentry, nd->path.mnt, mode);
else
- return spufs_create_context(nd->path.dentry->d_inode,
+ ret = spufs_create_context(nd->path.dentry->d_inode,
dentry, nd->path.mnt, flags, mode,
filp);
+ if (ret >= 0)
+ fsnotify_mkdir(nd->path.dentry->d_inode, dentry);
+ return ret;
out_dput:
dput(dentry);
#include "spufs.h"
/* interrupt-level stop callback function. */
-void spufs_stop_callback(struct spu *spu)
+void spufs_stop_callback(struct spu *spu, int irq)
{
struct spu_context *ctx = spu->ctx;
*/
if (ctx) {
/* Copy exception arguments into module specific structure */
- ctx->csa.class_0_pending = spu->class_0_pending;
- ctx->csa.dsisr = spu->dsisr;
- ctx->csa.dar = spu->dar;
+ switch(irq) {
+ case 0 :
+ ctx->csa.class_0_pending = spu->class_0_pending;
+ ctx->csa.class_0_dsisr = spu->class_0_dsisr;
+ ctx->csa.class_0_dar = spu->class_0_dar;
+ break;
+ case 1 :
+ ctx->csa.class_1_dsisr = spu->class_1_dsisr;
+ ctx->csa.class_1_dar = spu->class_1_dar;
+ break;
+ case 2 :
+ break;
+ }
/* ensure that the exception status has hit memory before a
* thread waiting on the context's stop queue is woken */
wake_up_all(&ctx->stop_wq);
}
-
- /* Clear callback arguments from spu structure */
- spu->class_0_pending = 0;
- spu->dsisr = 0;
- spu->dar = 0;
}
int spu_stopped(struct spu_context *ctx, u32 *stat)
if (!(*stat & SPU_STATUS_RUNNING) && (*stat & stopped))
return 1;
- dsisr = ctx->csa.dsisr;
+ dsisr = ctx->csa.class_0_dsisr;
+ if (dsisr & (MFC_DSISR_PTE_NOT_FOUND | MFC_DSISR_ACCESS_DENIED))
+ return 1;
+
+ dsisr = ctx->csa.class_1_dsisr;
if (dsisr & (MFC_DSISR_PTE_NOT_FOUND | MFC_DSISR_ACCESS_DENIED))
return 1;
u32 ls_pointer, npc;
void __iomem *ls;
long spu_ret;
- int ret, ret2;
+ int ret;
/* get syscall block from local store */
npc = ctx->ops->npc_read(ctx) & ~3;
if (spu_ret <= -ERESTARTSYS) {
ret = spu_handle_restartsys(ctx, &spu_ret, &npc);
}
- ret2 = spu_acquire(ctx);
+ mutex_lock(&ctx->state_mutex);
if (ret == -ERESTARTSYS)
return ret;
- if (ret2)
- return -EINTR;
}
/* need to re-get the ls, as it may have changed when we released the
if (mutex_lock_interruptible(&ctx->run_mutex))
return -ERESTARTSYS;
- spu_enable_spu(ctx);
ctx->event_return = 0;
ret = spu_acquire(ctx);
if (ret)
goto out_unlock;
+ spu_enable_spu(ctx);
+
spu_update_sched_info(ctx);
ret = spu_run_init(ctx, npc);
* if it is timesliced or preempted.
*/
ctx->cpus_allowed = current->cpus_allowed;
+
+ /* Save the current cpu id for spu interrupt routing. */
+ ctx->last_ran = raw_smp_processor_id();
}
void spu_update_sched_info(struct spu_context *ctx)
spu_switch_log_notify(spu, ctx, SWITCH_LOG_START, 0);
spu_restore(&ctx->csa, spu);
spu->timestamp = jiffies;
- spu_cpu_affinity_set(spu, raw_smp_processor_id());
spu_switch_notify(spu, ctx);
ctx->state = SPU_STATE_RUNNABLE;
victim->stats.invol_ctx_switch++;
spu->stats.invol_ctx_switch++;
- spu_add_to_rq(victim);
+ if (test_bit(SPU_SCHED_SPU_RUN, &ctx->sched_flags))
+ spu_add_to_rq(victim);
mutex_unlock(&victim->state_mutex);
cpumask_t cpus_allowed;
int policy;
int prio;
+ int last_ran;
/* statistics */
struct {
/* irq callback funcs. */
void spufs_ibox_callback(struct spu *spu);
void spufs_wbox_callback(struct spu *spu);
-void spufs_stop_callback(struct spu *spu);
+void spufs_stop_callback(struct spu *spu, int irq);
void spufs_mfc_callback(struct spu *spu);
void spufs_dma_callback(struct spu *spu, int type);
spu_int_mask_set(spu, 2, 0ul);
eieio();
spin_unlock_irq(&spu->register_lock);
+
+ /*
+ * This flag needs to be set before calling synchronize_irq so
+ * that the update will be visible to the relevant handlers
+ * via a simple load.
+ */
+ set_bit(SPU_CONTEXT_SWITCH_PENDING, &spu->flags);
+ clear_bit(SPU_CONTEXT_FAULT_PENDING, &spu->flags);
synchronize_irq(spu->irqs[0]);
synchronize_irq(spu->irqs[1]);
synchronize_irq(spu->irqs[2]);
/* Save, Step 7:
* Restore, Step 5:
* Set a software context switch pending flag.
+ * Done above in Step 3 - disable_interrupts().
*/
- set_bit(SPU_CONTEXT_SWITCH_PENDING, &spu->flags);
- mb();
}
static inline void save_mfc_cntl(struct spu_state *csa, struct spu *spu)
MFC_CNTL_SUSPEND_COMPLETE);
/* fall through */
case MFC_CNTL_SUSPEND_COMPLETE:
- if (csa) {
+ if (csa)
csa->priv2.mfc_control_RW =
- MFC_CNTL_SUSPEND_MASK |
+ in_be64(&priv2->mfc_control_RW) |
MFC_CNTL_SUSPEND_DMA_QUEUE;
- }
break;
case MFC_CNTL_NORMAL_DMA_QUEUE_OPERATION:
out_be64(&priv2->mfc_control_RW, MFC_CNTL_SUSPEND_DMA_QUEUE);
POLL_WHILE_FALSE((in_be64(&priv2->mfc_control_RW) &
MFC_CNTL_SUSPEND_DMA_STATUS_MASK) ==
MFC_CNTL_SUSPEND_COMPLETE);
- if (csa) {
- csa->priv2.mfc_control_RW = 0;
- }
+ if (csa)
+ csa->priv2.mfc_control_RW =
+ in_be64(&priv2->mfc_control_RW) &
+ ~MFC_CNTL_SUSPEND_DMA_QUEUE &
+ ~MFC_CNTL_SUSPEND_MASK;
break;
}
}
}
}
-static inline void save_mfc_decr(struct spu_state *csa, struct spu *spu)
+static inline void save_mfc_stopped_status(struct spu_state *csa,
+ struct spu *spu)
{
struct spu_priv2 __iomem *priv2 = spu->priv2;
+ const u64 mask = MFC_CNTL_DECREMENTER_RUNNING |
+ MFC_CNTL_DMA_QUEUES_EMPTY;
/* Save, Step 12:
* Read MFC_CNTL[Ds]. Update saved copy of
* CSA.MFC_CNTL[Ds].
+ *
+ * update: do the same with MFC_CNTL[Q].
*/
- csa->priv2.mfc_control_RW |=
- in_be64(&priv2->mfc_control_RW) & MFC_CNTL_DECREMENTER_RUNNING;
+ csa->priv2.mfc_control_RW &= ~mask;
+ csa->priv2.mfc_control_RW |= in_be64(&priv2->mfc_control_RW) & mask;
}
static inline void halt_mfc_decr(struct spu_state *csa, struct spu *spu)
* Restore, Step 14.
* Write MFC_CNTL[Pc]=1 (purge queue).
*/
- out_be64(&priv2->mfc_control_RW, MFC_CNTL_PURGE_DMA_REQUEST);
+ out_be64(&priv2->mfc_control_RW,
+ MFC_CNTL_PURGE_DMA_REQUEST |
+ MFC_CNTL_SUSPEND_MASK);
eieio();
}
/* Save, Step 48:
* Restore, Step 23.
* Change the software context switch pending flag
- * to context switch active.
+ * to context switch active. This implementation does
+ * not use a switch active flag.
*
- * This implementation does not uses a switch active flag.
+ * Now that we have saved the mfc in the csa, we can add in the
+ * restart command if an exception occurred.
*/
+ if (test_bit(SPU_CONTEXT_FAULT_PENDING, &spu->flags))
+ csa->priv2.mfc_control_RW |= MFC_CNTL_RESTART_DMA_COMMAND;
clear_bit(SPU_CONTEXT_SWITCH_PENDING, &spu->flags);
mb();
}
eieio();
}
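+/* Route the SPU interrupts to the CPU this context last ran on. */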
+static inline void set_int_route(struct spu_state *csa, struct spu *spu)
+{
+ struct spu_context *ctx = spu->ctx;
+
+ spu_cpu_affinity_set(spu, ctx->last_ran);
+}
+
static inline void restore_other_spu_access(struct spu_state *csa,
struct spu *spu)
{
*/
out_be64(&priv2->mfc_control_RW, csa->priv2.mfc_control_RW);
eieio();
+
/*
- * FIXME: this is to restart a DMA that we were processing
- * before the save. better remember the fault information
- * in the csa instead.
+ * The queue is put back into the same state that was evident prior to
+ * the context switch. The suspend flag is added to the saved state in
+ * the csa, if the operational state was suspending or suspended. In
+ * this case, the code that suspended the mfc is responsible for
+ * continuing it. Note that SPE faults do not change the operational
+ * state of the spu.
*/
- if ((csa->priv2.mfc_control_RW & MFC_CNTL_SUSPEND_DMA_QUEUE_MASK)) {
- out_be64(&priv2->mfc_control_RW, MFC_CNTL_RESTART_DMA_COMMAND);
- eieio();
- }
}
static inline void enable_user_access(struct spu_state *csa, struct spu *spu)
save_spu_runcntl(prev, spu); /* Step 9. */
save_mfc_sr1(prev, spu); /* Step 10. */
save_spu_status(prev, spu); /* Step 11. */
- save_mfc_decr(prev, spu); /* Step 12. */
+ save_mfc_stopped_status(prev, spu); /* Step 12. */
halt_mfc_decr(prev, spu); /* Step 13. */
save_timebase(prev, spu); /* Step 14. */
remove_other_spu_access(prev, spu); /* Step 15. */
check_ppuint_mb_stat(next, spu); /* Step 67. */
spu_invalidate_slbs(spu); /* Modified Step 68. */
restore_mfc_sr1(next, spu); /* Step 69. */
+ set_int_route(next, spu); /* NEW */
restore_other_spu_access(next, spu); /* Step 70. */
restore_spu_runcntl(next, spu); /* Step 71. */
restore_mfc_cntl(next, spu); /* Step 72. */
static struct mv643xx_eth_platform_data eth0_pd = {
+ .shared = &mv643xx_eth_shared_device,
.port_number = 0,
+
.tx_sram_addr = PEGASOS2_SRAM_BASE_ETH0,
.tx_sram_size = PEGASOS2_SRAM_TXRING_SIZE,
.tx_queue_size = PEGASOS2_SRAM_TXRING_SIZE/16,
};
static struct mv643xx_eth_platform_data eth1_pd = {
+ .shared = &mv643xx_eth_shared_device,
.port_number = 1,
+
.tx_sram_addr = PEGASOS2_SRAM_BASE_ETH1,
.tx_sram_size = PEGASOS2_SRAM_TXRING_SIZE,
.tx_queue_size = PEGASOS2_SRAM_TXRING_SIZE/16,
memset(&pdata, 0, sizeof(pdata));
+ pdata.shared = shared_pdev;
+
prop = of_get_property(np, "reg", NULL);
if (!prop)
return -ENODEV;
resource_size_t size = res->end - res->start + 1;
u64 sa;
- /* Calculate window size */
- sa = (0xffffffffffffffffull << ilog2(size));;
- if (res->flags & IORESOURCE_PREFETCH)
- sa |= 0x8;
+ if (port->endpoint) {
+ resource_size_t ep_addr = 0;
+ resource_size_t ep_size = 32 << 20;
+
+ /* Currently we map a fixed 64MByte window to PLB address
+ * 0 (SDRAM). This should probably be configurable via a dts
+ * property.
+ */
+
+ /* Calculate window size */
+ sa = (0xffffffffffffffffull << ilog2(ep_size));
+
+ /* Setup BAR0 */
+ out_le32(mbase + PECFG_BAR0HMPA, RES_TO_U32_HIGH(sa));
+ out_le32(mbase + PECFG_BAR0LMPA, RES_TO_U32_LOW(sa) |
+ PCI_BASE_ADDRESS_MEM_TYPE_64);
- out_le32(mbase + PECFG_BAR0HMPA, RES_TO_U32_HIGH(sa));
- out_le32(mbase + PECFG_BAR0LMPA, RES_TO_U32_LOW(sa));
+ /* Disable BAR1 & BAR2 */
+ out_le32(mbase + PECFG_BAR1MPA, 0);
+ out_le32(mbase + PECFG_BAR2HMPA, 0);
+ out_le32(mbase + PECFG_BAR2LMPA, 0);
- /* The setup of the split looks weird to me ... let's see if it works */
- out_le32(mbase + PECFG_PIM0LAL, 0x00000000);
- out_le32(mbase + PECFG_PIM0LAH, 0x00000000);
- out_le32(mbase + PECFG_PIM1LAL, 0x00000000);
- out_le32(mbase + PECFG_PIM1LAH, 0x00000000);
- out_le32(mbase + PECFG_PIM01SAH, 0xffff0000);
- out_le32(mbase + PECFG_PIM01SAL, 0x00000000);
+ out_le32(mbase + PECFG_PIM01SAH, RES_TO_U32_HIGH(sa));
+ out_le32(mbase + PECFG_PIM01SAL, RES_TO_U32_LOW(sa));
+
+ out_le32(mbase + PCI_BASE_ADDRESS_0, RES_TO_U32_LOW(ep_addr));
+ out_le32(mbase + PCI_BASE_ADDRESS_1, RES_TO_U32_HIGH(ep_addr));
+ } else {
+ /* Calculate window size */
+ sa = (0xffffffffffffffffull << ilog2(size));
+ if (res->flags & IORESOURCE_PREFETCH)
+ sa |= 0x8;
+
+ out_le32(mbase + PECFG_BAR0HMPA, RES_TO_U32_HIGH(sa));
+ out_le32(mbase + PECFG_BAR0LMPA, RES_TO_U32_LOW(sa));
+
+ /* The setup of the split looks weird to me ... let's see
+ * if it works
+ */
+ out_le32(mbase + PECFG_PIM0LAL, 0x00000000);
+ out_le32(mbase + PECFG_PIM0LAH, 0x00000000);
+ out_le32(mbase + PECFG_PIM1LAL, 0x00000000);
+ out_le32(mbase + PECFG_PIM1LAH, 0x00000000);
+ out_le32(mbase + PECFG_PIM01SAH, 0xffff0000);
+ out_le32(mbase + PECFG_PIM01SAL, 0x00000000);
+
+ out_le32(mbase + PCI_BASE_ADDRESS_0, RES_TO_U32_LOW(res->start));
+ out_le32(mbase + PCI_BASE_ADDRESS_1, RES_TO_U32_HIGH(res->start));
+ }
/* Enable inbound mapping */
out_le32(mbase + PECFG_PIMEN, 0x1);
- out_le32(mbase + PCI_BASE_ADDRESS_0, RES_TO_U32_LOW(res->start));
- out_le32(mbase + PCI_BASE_ADDRESS_1, RES_TO_U32_HIGH(res->start));
-
/* Enable I/O, Mem, and Busmaster cycles */
out_le16(mbase + PCI_COMMAND,
in_le16(mbase + PCI_COMMAND) |
const int *bus_range;
int primary = 0, busses;
void __iomem *mbase = NULL, *cfg_data = NULL;
-
- /* XXX FIXME: Handle endpoint mode properly */
- if (port->endpoint) {
- printk(KERN_WARNING "PCIE%d: Port in endpoint mode !\n",
- port->index);
- return;
- }
+ const u32 *pval;
+ u32 val;
/* Check if primary bridge */
if (of_get_property(port->node, "primary", NULL))
hose->last_busno = hose->first_busno + busses;
}
- /* We map the external config space in cfg_data and the host config
- * space in cfg_addr. External space is 1M per bus, internal space
- * is 4K
+ if (!port->endpoint) {
+ /* Only map the external config space in cfg_data for
+ * PCIe root-complexes. External space is 1M per bus
+ */
+ cfg_data = ioremap(port->cfg_space.start +
+ (hose->first_busno + 1) * 0x100000,
+ busses * 0x100000);
+ if (cfg_data == NULL) {
+ printk(KERN_ERR "%s: Can't map external config space !",
+ port->node->full_name);
+ goto fail;
+ }
+ hose->cfg_data = cfg_data;
+ }
+
+ /* Always map the host config space in cfg_addr.
+ * Internal space is 4K
*/
- cfg_data = ioremap(port->cfg_space.start +
- (hose->first_busno + 1) * 0x100000,
- busses * 0x100000);
mbase = ioremap(port->cfg_space.start + 0x10000000, 0x1000);
- if (cfg_data == NULL || mbase == NULL) {
- printk(KERN_ERR "%s: Can't map config space !",
+ if (mbase == NULL) {
+ printk(KERN_ERR "%s: Can't map internal config space !",
port->node->full_name);
goto fail;
}
-
- hose->cfg_data = cfg_data;
hose->cfg_addr = mbase;
pr_debug("PCIE %s, bus %d..%d\n", port->node->full_name,
port->hose = hose;
mbase = (void __iomem *)hose->cfg_addr;
- /*
- * Set bus numbers on our root port
- */
- out_8(mbase + PCI_PRIMARY_BUS, hose->first_busno);
- out_8(mbase + PCI_SECONDARY_BUS, hose->first_busno + 1);
- out_8(mbase + PCI_SUBORDINATE_BUS, hose->last_busno);
+ if (!port->endpoint) {
+ /*
+ * Set bus numbers on our root port
+ */
+ out_8(mbase + PCI_PRIMARY_BUS, hose->first_busno);
+ out_8(mbase + PCI_SECONDARY_BUS, hose->first_busno + 1);
+ out_8(mbase + PCI_SUBORDINATE_BUS, hose->last_busno);
+ }
/*
* OMRs are already reset, also disable PIMs
ppc4xx_configure_pciex_PIMs(port, hose, mbase, &dma_window);
/* The root complex doesn't show up if we don't set some vendor
- * and device IDs into it. Those are the same bogus one that the
- * initial code in arch/ppc add. We might want to change that.
+ * and device IDs into it. The defaults below are the same bogus
+ * ones that the initial code in arch/ppc had. This can be
+ * overwritten by setting the "vendor-id/device-id" properties
+ * in the pciex node.
*/
- out_le16(mbase + 0x200, 0xaaa0 + port->index);
- out_le16(mbase + 0x202, 0xbed0 + port->index);
- /* Set Class Code to PCI-PCI bridge and Revision Id to 1 */
- out_le32(mbase + 0x208, 0x06040001);
+ /* Get the (optional) vendor-/device-id from the device-tree */
+ pval = of_get_property(port->node, "vendor-id", NULL);
+ if (pval) {
+ val = *pval;
+ } else {
+ if (!port->endpoint)
+ val = 0xaaa0 + port->index;
+ else
+ val = 0xeee0 + port->index;
+ }
+ out_le16(mbase + 0x200, val);
+
+ pval = of_get_property(port->node, "device-id", NULL);
+ if (pval) {
+ val = *pval;
+ } else {
+ if (!port->endpoint)
+ val = 0xbed0 + port->index;
+ else
+ val = 0xfed0 + port->index;
+ }
+ out_le16(mbase + 0x202, val);
+
+ if (!port->endpoint) {
+ /* Set Class Code to PCI-PCI bridge and Revision Id to 1 */
+ out_le32(mbase + 0x208, 0x06040001);
+
+ printk(KERN_INFO "PCIE%d: successfully set as root-complex\n",
+ port->index);
+ } else {
+ /* Set Class Code to Processor/PPC */
+ out_le32(mbase + 0x208, 0x0b200001);
+
+ printk(KERN_INFO "PCIE%d: successfully set as endpoint\n",
+ port->index);
+ }
- printk(KERN_INFO "PCIE%d: successfully set as root-complex\n",
- port->index);
return;
fail:
if (hose)
const u32 *pval;
int portno;
unsigned int dcrs;
+ const char *val;
/* First, proceed to core initialization as we assume there's
* only one PCIe core in the system
}
port->sdr_base = *pval;
- /* XXX Currently, we only support root complex mode */
- port->endpoint = 0;
+ /* Check if device_type property is set to "pci" or "pci-endpoint".
+ * Resulting from this setup this PCIe port will be configured
+ * as root-complex or as endpoint.
+ */
+ val = of_get_property(port->node, "device_type", NULL);
+ if (!strcmp(val, "pci-endpoint")) {
+ port->endpoint = 1;
+ } else if (!strcmp(val, "pci")) {
+ port->endpoint = 0;
+ } else {
+ printk(KERN_ERR "PCIE: missing or incorrect device_type for %s\n",
+ np->full_name);
+ return;
+ }
/* Fetch config space registers address */
if (of_address_to_resource(np, 0, &port->cfg_space)) {
DUMP_FIELD(spu, "0x%lx", ls_size);
DUMP_FIELD(spu, "0x%x", node);
DUMP_FIELD(spu, "0x%lx", flags);
- DUMP_FIELD(spu, "0x%lx", dar);
- DUMP_FIELD(spu, "0x%lx", dsisr);
DUMP_FIELD(spu, "%d", class_0_pending);
+ DUMP_FIELD(spu, "0x%lx", class_0_dar);
+ DUMP_FIELD(spu, "0x%lx", class_0_dsisr);
+ DUMP_FIELD(spu, "0x%lx", class_1_dar);
+ DUMP_FIELD(spu, "0x%lx", class_1_dsisr);
DUMP_FIELD(spu, "0x%lx", irqs[0]);
DUMP_FIELD(spu, "0x%lx", irqs[1]);
DUMP_FIELD(spu, "0x%lx", irqs[2]);
};
static struct mv643xx_eth_platform_data eth0_pd = {
+ .shared = &mv64x60_eth_shared_device,
.port_number = 0,
};
};
static struct mv643xx_eth_platform_data eth1_pd = {
+ .shared = &mv64x60_eth_shared_device,
.port_number = 1,
};
};
static struct mv643xx_eth_platform_data eth2_pd = {
+ .shared = &mv64x60_eth_shared_device,
.port_number = 2,
};
Select this option to enable the special message interface to
the cooperative memory management.
+config PAGE_STATES
+ bool "Unused page notification"
+ help
+ This enables the notification of unused pages to the
+ hypervisor. The ESSA instruction is used to perform the state
+ changes between a page that has content and the unused state.
+
config VIRT_TIMER
bool "Virtual CPU timer support"
help
lgfr %r3,%r3 # long
llgtr %r4,%r4 # long
llgfr %r5,%r5 # long
- jg sys_ptrace # branch to system call
+ jg compat_sys_ptrace # branch to system call
.globl sys32_alarm_wrapper
sys32_alarm_wrapper:
st %r2,SP_R2(%r15) # store return value (change R2 on stack)
sysc_return:
- tm SP_PSW+1(%r15),0x01 # returning to user ?
- bno BASED(sysc_restore)
tm __TI_flags+3(%r9),_TIF_WORK_SVC
bnz BASED(sysc_work) # there is work to do (signals etc.)
sysc_restore:
# One of the work bits is on. Find out which one.
#
sysc_work:
+ tm SP_PSW+1(%r15),0x01 # returning to user ?
+ bno BASED(sysc_restore)
tm __TI_flags+3(%r9),_TIF_MCCK_PENDING
bo BASED(sysc_mcck_pending)
tm __TI_flags+3(%r9),_TIF_NEED_RESCHED
la %r2,SP_PTREGS(%r15) # address of register-save area
basr %r14,%r1 # branch to standard irq handler
io_return:
- tm SP_PSW+1(%r15),0x01 # returning to user ?
-#ifdef CONFIG_PREEMPT
- bno BASED(io_preempt) # no -> check for preemptive scheduling
-#else
- bno BASED(io_restore) # no-> skip resched & signal
-#endif
tm __TI_flags+3(%r9),_TIF_WORK_INT
bnz BASED(io_work) # there is work to do (signals etc.)
io_restore:
.long 0, io_restore_trace + 0x80000000
#endif
-#ifdef CONFIG_PREEMPT
-io_preempt:
+#
+# switch to kernel stack, then check the TIF bits
+#
+io_work:
+ tm SP_PSW+1(%r15),0x01 # returning to user ?
+#ifndef CONFIG_PREEMPT
+ bno BASED(io_restore) # no-> skip resched & signal
+#else
+ bnz BASED(io_work_user) # no -> check for preemptive scheduling
+ # check for preemptive scheduling
icm %r0,15,__TI_precount(%r9)
- bnz BASED(io_restore)
+ bnz BASED(io_restore) # preemption disabled
l %r1,SP_R15(%r15)
s %r1,BASED(.Lc_spsize)
mvc SP_PTREGS(__PT_SIZE,%r1),SP_PTREGS(%r15)
br %r1 # call schedule
#endif
-#
-# switch to kernel stack, then check the TIF bits
-#
-io_work:
+io_work_user:
l %r1,__LC_KERNEL_STACK
s %r1,BASED(.Lc_spsize)
mvc SP_PTREGS(__PT_SIZE,%r1),SP_PTREGS(%r15)
stg %r2,SP_R2(%r15) # store return value (change R2 on stack)
sysc_return:
- tm SP_PSW+1(%r15),0x01 # returning to user ?
- jno sysc_restore
tm __TI_flags+7(%r9),_TIF_WORK_SVC
jnz sysc_work # there is work to do (signals etc.)
sysc_restore:
# One of the work bits is on. Find out which one.
#
sysc_work:
+ tm SP_PSW+1(%r15),0x01 # returning to user ?
+ jno sysc_restore
tm __TI_flags+7(%r9),_TIF_MCCK_PENDING
jo sysc_mcck_pending
tm __TI_flags+7(%r9),_TIF_NEED_RESCHED
la %r2,SP_PTREGS(%r15) # address of register-save area
brasl %r14,do_IRQ # call standard irq handler
io_return:
- tm SP_PSW+1(%r15),0x01 # returning to user ?
-#ifdef CONFIG_PREEMPT
- jno io_preempt # no -> check for preemptive scheduling
-#else
- jno io_restore # no-> skip resched & signal
-#endif
tm __TI_flags+7(%r9),_TIF_WORK_INT
jnz io_work # there is work to do (signals etc.)
io_restore:
.quad 0, io_restore_trace
#endif
-#ifdef CONFIG_PREEMPT
-io_preempt:
+#
+# There is work to do; we need to check whether we return to user space
+# and, if we are currently in SIE, leave it first.
+#
+io_work:
+ tm SP_PSW+1(%r15),0x01 # returning to user ?
+#ifndef CONFIG_PREEMPT
+#if defined(CONFIG_KVM) || defined(CONFIG_KVM_MODULE)
+ jnz io_work_user # yes -> no need to check for SIE
+ la %r1, BASED(sie_opcode) # we return to kernel here
+ lg %r2, SP_PSW+8(%r15)
+ clc 0(2,%r1), 0(%r2) # is current instruction = SIE?
+ jne io_restore # no-> return to kernel
+ lg %r1, SP_PSW+8(%r15) # yes-> add 4 bytes to leave SIE
+ aghi %r1, 4
+ stg %r1, SP_PSW+8(%r15)
+ j io_restore # return to kernel
+#else
+ jno io_restore # no-> skip resched & signal
+#endif
+#else
+ jnz io_work_user # yes -> do resched & signal
+#if defined(CONFIG_KVM) || defined(CONFIG_KVM_MODULE)
+ la %r1, BASED(sie_opcode)
+ lg %r2, SP_PSW+8(%r15)
+ clc 0(2,%r1), 0(%r2) # is current instruction = SIE?
+ jne 0f # no -> leave PSW alone
+ lg %r1, SP_PSW+8(%r15) # yes-> add 4 bytes to leave SIE
+ aghi %r1, 4
+ stg %r1, SP_PSW+8(%r15)
+0:
+#endif
+ # check for preemptive scheduling
icm %r0,15,__TI_precount(%r9)
- jnz io_restore
+ jnz io_restore # preemption is disabled
# switch to kernel stack
lg %r1,SP_R15(%r15)
aghi %r1,-SP_SIZE
jg preempt_schedule_irq
#endif
-#
-# switch to kernel stack, then check TIF bits
-#
-io_work:
+io_work_user:
lg %r1,__LC_KERNEL_STACK
aghi %r1,-SP_SIZE
mvc SP_PTREGS(__PT_SIZE,%r1),SP_PTREGS(%r15)
j io_restore
io_work_done:
+#if defined(CONFIG_KVM) || defined(CONFIG_KVM_MODULE)
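+# First two bytes of the SIE opcode, compared against the interrupted
+# instruction to detect a return into SIE.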
+sie_opcode:
+ .long 0xb2140000
+#endif
+
#
# _TIF_MCCK_PENDING is set, call handler
#
return 0;
}
-static int
-do_ptrace_normal(struct task_struct *child, long request, long addr, long data)
+long arch_ptrace(struct task_struct *child, long request, long addr, long data)
{
ptrace_area parea;
int copied, ret;
return 0;
}
-static int
-do_ptrace_emu31(struct task_struct *child, long request, long addr, long data)
+long compat_arch_ptrace(struct task_struct *child, compat_long_t request,
+ compat_ulong_t caddr, compat_ulong_t cdata)
{
- unsigned int tmp; /* 4 bytes !! */
+ unsigned long addr = caddr;
+ unsigned long data = cdata;
ptrace_area_emu31 parea;
int copied, ret;
switch (request) {
- case PTRACE_PEEKTEXT:
- case PTRACE_PEEKDATA:
- /* read word at location addr. */
- copied = access_process_vm(child, addr, &tmp, sizeof(tmp), 0);
- if (copied != sizeof(tmp))
- return -EIO;
- return put_user(tmp, (unsigned int __force __user *) data);
-
case PTRACE_PEEKUSR:
/* read the word at location addr in the USER area. */
return peek_user_emu31(child, addr, data);
- case PTRACE_POKETEXT:
- case PTRACE_POKEDATA:
- /* write the word at location addr. */
- tmp = data;
- copied = access_process_vm(child, addr, &tmp, sizeof(tmp), 1);
- if (copied != sizeof(tmp))
- return -EIO;
- return 0;
-
case PTRACE_POKEUSR:
/* write the word at location addr in the USER area */
return poke_user_emu31(child, addr, data);
copied += sizeof(unsigned int);
}
return 0;
- case PTRACE_GETEVENTMSG:
- return put_user((__u32) child->ptrace_message,
- (unsigned int __force __user *) data);
- case PTRACE_GETSIGINFO:
- if (child->last_siginfo == NULL)
- return -EINVAL;
- return copy_siginfo_to_user32((compat_siginfo_t
- __force __user *) data,
- child->last_siginfo);
- case PTRACE_SETSIGINFO:
- if (child->last_siginfo == NULL)
- return -EINVAL;
- return copy_siginfo_from_user32(child->last_siginfo,
- (compat_siginfo_t
- __force __user *) data);
}
- return ptrace_request(child, request, addr, data);
+ return compat_ptrace_request(child, request, addr, data);
}
#endif
-long arch_ptrace(struct task_struct *child, long request, long addr, long data)
-{
- switch (request) {
- case PTRACE_SYSCALL:
- /* continue and stop at next (return from) syscall */
- case PTRACE_CONT:
- /* restart after signal. */
- if (!valid_signal(data))
- return -EIO;
- if (request == PTRACE_SYSCALL)
- set_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
- else
- clear_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
- child->exit_code = data;
- /* make sure the single step bit is not set. */
- user_disable_single_step(child);
- wake_up_process(child);
- return 0;
-
- case PTRACE_KILL:
- /*
- * make the child exit. Best I can do is send it a sigkill.
- * perhaps it should be put in the status that it wants to
- * exit.
- */
- if (child->exit_state == EXIT_ZOMBIE) /* already dead */
- return 0;
- child->exit_code = SIGKILL;
- /* make sure the single step bit is not set. */
- user_disable_single_step(child);
- wake_up_process(child);
- return 0;
-
- case PTRACE_SINGLESTEP:
- /* set the trap flag. */
- if (!valid_signal(data))
- return -EIO;
- clear_tsk_thread_flag(child, TIF_SYSCALL_TRACE);
- child->exit_code = data;
- user_enable_single_step(child);
- /* give it a chance to run. */
- wake_up_process(child);
- return 0;
-
- /* Do requests that differ for 31/64 bit */
- default:
-#ifdef CONFIG_COMPAT
- if (test_thread_flag(TIF_31BIT))
- return do_ptrace_emu31(child, request, addr, data);
-#endif
- return do_ptrace_normal(child, request, addr, data);
- }
- /* Not reached. */
- return -EIO;
-}
-
asmlinkage void
syscall_trace(struct pt_regs *regs, int entryexit)
{
select PREEMPT_NOTIFIERS
select ANON_INODES
select S390_SWITCH_AMODE
- select PREEMPT
---help---
Support hosting paravirtualized guest machines using the SIE
virtualization capability on the mainframe. This should work
static int handle_noop(struct kvm_vcpu *vcpu)
{
switch (vcpu->arch.sie_block->icptcode) {
+ case 0x0:
+ vcpu->stat.exit_null++;
+ break;
case 0x10:
vcpu->stat.exit_external_request++;
break;
struct kvm_stats_debugfs_item debugfs_entries[] = {
{ "userspace_handled", VCPU_STAT(exit_userspace) },
+ { "exit_null", VCPU_STAT(exit_null) },
{ "exit_validity", VCPU_STAT(exit_validity) },
{ "exit_stop_request", VCPU_STAT(exit_stop_request) },
{ "exit_external_request", VCPU_STAT(exit_external_request) },
vcpu->arch.guest_fpregs.fpc &= FPC_VALID_MASK;
restore_fp_regs(&vcpu->arch.guest_fpregs);
restore_access_regs(vcpu->arch.guest_acrs);
-
- if (signal_pending(current))
- atomic_set_mask(CPUSTAT_STOP_INT,
- &vcpu->arch.sie_block->cpuflags);
}
void kvm_arch_vcpu_put(struct kvm_vcpu *vcpu)
obj-y := init.o fault.o extmem.o mmap.o vmem.o pgtable.o
obj-$(CONFIG_CMM) += cmm.o
obj-$(CONFIG_HUGETLB_PAGE) += hugetlbpage.o
+obj-$(CONFIG_PAGE_STATES) += page-states.o
/* clear the zero-page */
memset(empty_zero_page, 0, PAGE_SIZE);
+ /* Setup guest page hinting */
+ cmma_init();
+
/* this will put all low memory onto the freelists */
totalram_pages += free_all_bootmem();
--- /dev/null
+/*
+ * arch/s390/mm/page-states.c
+ *
+ * Copyright IBM Corp. 2008
+ *
+ * Guest page hinting for unused pages.
+ *
+ * Author(s): Martin Schwidefsky <schwidefsky@de.ibm.com>
+ */
+
+#include <linux/kernel.h>
+#include <linux/errno.h>
+#include <linux/types.h>
+#include <linux/mm.h>
+#include <linux/init.h>
+
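+/*
+ * ESSA page states used for guest page hinting: "stable" marks a page
+ * whose content must be preserved, "unused" tells the hypervisor that
+ * the page content may be discarded.
+ */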
+#define ESSA_SET_STABLE 1
+#define ESSA_SET_UNUSED 2
+
+static int cmma_flag;
+
+static int __init cmma(char *str)
+{
+ char *parm;
+ parm = strstrip(str);
+ if (strcmp(parm, "yes") == 0 || strcmp(parm, "on") == 0) {
+ cmma_flag = 1;
+ return 1;
+ }
+ cmma_flag = 0;
+ if (strcmp(parm, "no") == 0 || strcmp(parm, "off") == 0)
+ return 1;
+ return 0;
+}
+
+__setup("cmma=", cmma);
+
+void __init cmma_init(void)
+{
+ register unsigned long tmp asm("0") = 0;
+ register int rc asm("1") = -EOPNOTSUPP;
+
+ if (!cmma_flag)
+ return;
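+ /*
+ * Probe for the ESSA instruction; if it is not available, the
+ * exception fixup leaves rc at -EOPNOTSUPP and the feature is
+ * switched off again.
+ */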
+ asm volatile(
+ " .insn rrf,0xb9ab0000,%1,%1,0,0\n"
+ "0: la %0,0\n"
+ "1:\n"
+ EX_TABLE(0b,1b)
+ : "+&d" (rc), "+&d" (tmp));
+ if (rc)
+ cmma_flag = 0;
+}
+
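+/* Freed pages no longer hold useful content: mark them unused. */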
+void arch_free_page(struct page *page, int order)
+{
+ int i, rc;
+
+ if (!cmma_flag)
+ return;
+ for (i = 0; i < (1 << order); i++)
+ asm volatile(".insn rrf,0xb9ab0000,%0,%1,%2,0"
+ : "=&d" (rc)
+ : "a" ((page_to_pfn(page) + i) << PAGE_SHIFT),
+ "i" (ESSA_SET_UNUSED));
+}
+
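+/* Allocated pages must be preserved again: mark them stable. */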
+void arch_alloc_page(struct page *page, int order)
+{
+ int i, rc;
+
+ if (!cmma_flag)
+ return;
+ for (i = 0; i < (1 << order); i++)
+ asm volatile(".insn rrf,0xb9ab0000,%0,%1,%2,0"
+ : "=&d" (rc)
+ : "a" ((page_to_pfn(page) + i) << PAGE_SHIFT),
+ "i" (ESSA_SET_STABLE));
+}
Select Dreamcast if configuring for a SEGA Dreamcast.
More information at <http://www.linux-sh.org>
-config SH_MPC1211
- bool "Interface MPC1211"
- depends on CPU_SUBTYPE_SH7751 && BROKEN
- help
- CTP/PCI-SH02 is a CPU module computer that is produced
- by Interface Corporation.
- More information at <http://www.interface.co.jp>
-
config SH_SH03
bool "Interface CTP/PCI-SH03"
depends on CPU_SUBTYPE_SH7751
endmenu
config ISA_DMA_API
- def_bool y
- depends on SH_MPC1211
+ bool
menu "Kernel features"
config KEXEC
bool "kexec system call (EXPERIMENTAL)"
- depends on EXPERIMENTAL
+ depends on SUPERH32 && EXPERIMENTAL
help
kexec is a system call that implements the ability to shutdown your
current kernel, and to start another kernel. It is like a reboot
config CRASH_DUMP
bool "kernel crash dumps (EXPERIMENTAL)"
- depends on EXPERIMENTAL
+ depends on SUPERH32 && EXPERIMENTAL
help
Generate crash dump after being started by kexec.
This should be normally only set in special crash dump kernels
config ZERO_PAGE_OFFSET
hex "Zero page offset"
- default "0x00004000" if SH_MPC1211 || SH_SH03
+ default "0x00004000" if SH_SH03
default "0x00010000" if PAGE_SIZE_64KB
default "0x00002000" if PAGE_SIZE_8KB
default "0x00001000"
config SH_STANDARD_BIOS
bool "Use LinuxSH standard BIOS"
+ depends on SUPERH32
help
Say Y here if your target has the gdb-sh-stub
package from www.m17n.org (or any conforming standard LinuxSH BIOS)
machdir-$(CONFIG_SH_7721_SOLUTION_ENGINE) += se/7721
machdir-$(CONFIG_SH_HP6XX) += hp6xx
machdir-$(CONFIG_SH_DREAMCAST) += dreamcast
-machdir-$(CONFIG_SH_MPC1211) += mpc1211
machdir-$(CONFIG_SH_SH03) += sh03
machdir-$(CONFIG_SH_SECUREEDGE5410) += snapgear
machdir-$(CONFIG_SH_RTS7751R2D) += renesas/rts7751r2d
+++ /dev/null
-#
-# Makefile for the Interface (CTP/PCI/MPC-SH02) specific parts of the kernel
-#
-
-obj-y := setup.o rtc.o
-
-obj-$(CONFIG_PCI) += pci.o
-
+++ /dev/null
-/*
- * Low-Level PCI Support for the MPC-1211(CTP/PCI/MPC-SH02)
- *
- * (c) 2002-2003 Saito.K & Jeanne
- *
- * Dustin McIntire (dustin@sensoria.com)
- * Derived from arch/i386/kernel/pci-*.c which bore the message:
- * (c) 1999--2000 Martin Mares <mj@ucw.cz>
- *
- * May be copied or modified under the terms of the GNU General Public
- * License. See linux/COPYING for more information.
- *
- */
-#include <linux/types.h>
-#include <linux/kernel.h>
-#include <linux/init.h>
-#include <linux/delay.h>
-#include <linux/pci.h>
-#include <linux/sched.h>
-#include <linux/ioport.h>
-#include <linux/errno.h>
-#include <linux/irq.h>
-#include <linux/interrupt.h>
-
-#include <asm/machvec.h>
-#include <asm/io.h>
-#include <asm/mpc1211/pci.h>
-
-static struct resource mpcpci_io_resource = {
- "MPCPCI IO",
- 0x00000000,
- 0xffffffff,
- IORESOURCE_IO
-};
-
-static struct resource mpcpci_mem_resource = {
- "MPCPCI mem",
- 0x00000000,
- 0xffffffff,
- IORESOURCE_MEM
-};
-
-static struct pci_ops pci_direct_conf1;
-struct pci_channel board_pci_channels[] = {
- {&pci_direct_conf1, &mpcpci_io_resource, &mpcpci_mem_resource, 0, 256},
- {NULL, NULL, NULL, 0, 0},
-};
-
-/*
- * Direct access to PCI hardware...
- */
-
-
-#define CONFIG_CMD(bus, devfn, where) (0x80000000 | (bus->number << 16) | (devfn << 8) | (where & ~3))
-
-/*
- * Functions for accessing PCI configuration space with type 1 accesses
- */
-static int pci_conf1_read(struct pci_bus *bus, unsigned int devfn, int where, int size, u32 *value)
-{
- u32 word;
- unsigned long flags;
-
- /*
- * PCIPDR may only be accessed as 32 bit words,
- * so we must do byte alignment by hand
- */
- local_irq_save(flags);
- writel(CONFIG_CMD(bus,devfn,where), PCIPAR);
- word = readl(PCIPDR);
- local_irq_restore(flags);
-
- switch (size) {
- case 1:
- switch (where & 0x3) {
- case 3:
- *value = (u8)(word >> 24);
- break;
- case 2:
- *value = (u8)(word >> 16);
- break;
- case 1:
- *value = (u8)(word >> 8);
- break;
- default:
- *value = (u8)word;
- break;
- }
- break;
- case 2:
- switch (where & 0x3) {
- case 3:
- *value = (u16)(word >> 24);
- local_irq_save(flags);
- writel(CONFIG_CMD(bus,devfn,(where+1)), PCIPAR);
- word = readl(PCIPDR);
- local_irq_restore(flags);
- *value |= ((word & 0xff) << 8);
- break;
- case 2:
- *value = (u16)(word >> 16);
- break;
- case 1:
- *value = (u16)(word >> 8);
- break;
- default:
- *value = (u16)word;
- break;
- }
- break;
- case 4:
- *value = word;
- break;
- }
- PCIDBG(4,"pci_conf1_read@0x%08x=0x%x\n", CONFIG_CMD(bus,devfn,where),*value);
- return PCIBIOS_SUCCESSFUL;
-}
-
-/*
- * Since MPC-1211 only does 32bit access we'll have to do a read,mask,write operation.
- * We'll allow an odd byte offset, though it should be illegal.
- */
-static int pci_conf1_write(struct pci_bus *bus, unsigned int devfn, int where, int size, u32 value)
-{
- u32 word,mask = 0;
- unsigned long flags;
- u32 shift = (where & 3) * 8;
-
- if(size == 1) {
- mask = ((1 << 8) - 1) << shift; // create the byte mask
- } else if(size == 2){
- if(shift == 24)
- return PCIBIOS_BAD_REGISTER_NUMBER;
- mask = ((1 << 16) - 1) << shift; // create the word mask
- }
- local_irq_save(flags);
- writel(CONFIG_CMD(bus,devfn,where), PCIPAR);
- if(size == 4){
- writel(value, PCIPDR);
- local_irq_restore(flags);
- PCIDBG(4,"pci_conf1_write@0x%08x=0x%x\n", CONFIG_CMD(bus,devfn,where),value);
- return PCIBIOS_SUCCESSFUL;
- }
- word = readl(PCIPDR);
- word &= ~mask;
- word |= ((value << shift) & mask);
- writel(word, PCIPDR);
- local_irq_restore(flags);
- PCIDBG(4,"pci_conf1_write@0x%08x=0x%x\n", CONFIG_CMD(bus,devfn,where),word);
- return PCIBIOS_SUCCESSFUL;
-}
-
-#undef CONFIG_CMD
-
-static struct pci_ops pci_direct_conf1 = {
- .read = pci_conf1_read,
- .write = pci_conf1_write,
-};
-
-static void __devinit quirk_ali_ide_ports(struct pci_dev *dev)
-{
- dev->resource[0].start = 0x1f0;
- dev->resource[0].end = 0x1f7;
- dev->resource[0].flags = IORESOURCE_IO;
- dev->resource[1].start = 0x3f6;
- dev->resource[1].end = 0x3f6;
- dev->resource[1].flags = IORESOURCE_IO;
- dev->resource[2].start = 0x170;
- dev->resource[2].end = 0x177;
- dev->resource[2].flags = IORESOURCE_IO;
- dev->resource[3].start = 0x376;
- dev->resource[3].end = 0x376;
- dev->resource[3].flags = IORESOURCE_IO;
- dev->resource[4].start = 0xf000;
- dev->resource[4].end = 0xf00f;
- dev->resource[4].flags = IORESOURCE_IO;
-}
-DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_AL, PCI_DEVICE_ID_AL_M5229, quirk_ali_ide_ports);
-
-char * __devinit pcibios_setup(char *str)
-{
- return str;
-}
-
-/*
- * Called after each bus is probed, but before its children
- * are examined.
- */
-
-void __devinit pcibios_fixup_bus(struct pci_bus *b)
-{
- pci_read_bridge_bases(b);
-}
-
-/*
- * IRQ functions
- */
-static inline u8 bridge_swizzle(u8 pin, u8 slot)
-{
- return (((pin-1) + slot) % 4) + 1;
-}
-
-static inline u8 bridge_swizzle_pci_1(u8 pin, u8 slot)
-{
- return (((pin-1) - slot) & 3) + 1;
-}
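
bridge_swizzle() above is the usual PCI-to-PCI bridge interrupt rotation: the
INTx pin seen upstream of a bridge is rotated by the device's slot number,
staying within the 1..4 (INTA..INTD) range. A small standalone check with
hypothetical slot numbers:

    #include <stdio.h>

    /* Same rotation as bridge_swizzle() above; pins are 1..4 (INTA..INTD). */
    static unsigned char bridge_swizzle(unsigned char pin, unsigned char slot)
    {
            return (((pin - 1) + slot) % 4) + 1;
    }

    int main(void)
    {
            unsigned char slot;

            for (slot = 0; slot < 4; slot++)        /* hypothetical slots 0..3 */
                    printf("slot %u: INTA maps to INT%c\n",
                           slot, 'A' + bridge_swizzle(1, slot) - 1);
            return 0;
    }
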
-
-static u8 __init mpc1211_swizzle(struct pci_dev *dev, u8 *pinp)
-{
- unsigned long flags;
- u8 pin = *pinp;
- u32 word;
-
- for ( ; dev->bus->self; dev = dev->bus->self) {
- if (!pin)
- continue;
-
- if (dev->bus->number == 1) {
- local_irq_save(flags);
- writel(0x80000000 | 0x2c, PCIPAR);
- word = readl(PCIPDR);
- local_irq_restore(flags);
- word >>= 16;
-
- if (word == 0x0001)
- pin = bridge_swizzle_pci_1(pin, PCI_SLOT(dev->devfn));
- else
- pin = bridge_swizzle(pin, PCI_SLOT(dev->devfn));
- } else
- pin = bridge_swizzle(pin, PCI_SLOT(dev->devfn));
- }
-
- *pinp = pin;
-
- return PCI_SLOT(dev->devfn);
-}
-
-static int __init map_mpc1211_irq(struct pci_dev *dev, u8 slot, u8 pin)
-{
- int irq = -1;
-
- /* now lookup the actual IRQ on a platform specific basis (pci-'platform'.c) */
- if (dev->bus->number == 0) {
- switch (slot) {
- case 13: irq = 9; break; /* USB */
- case 22: irq = 10; break; /* LAN */
- default: irq = 0; break;
- }
- } else {
- switch (pin) {
- case 0: irq = 0; break;
- case 1: irq = 7; break;
- case 2: irq = 9; break;
- case 3: irq = 10; break;
- case 4: irq = 11; break;
- }
- }
-
- if( irq < 0 ) {
- PCIDBG(3, "PCI: Error mapping IRQ on device %s\n", pci_name(dev));
- return irq;
- }
-
- PCIDBG(2, "Setting IRQ for slot %s to %d\n", pci_name(dev), irq);
-
- return irq;
-}
-
-void __init pcibios_fixup_irqs(void)
-{
- pci_fixup_irqs(mpc1211_swizzle, map_mpc1211_irq);
-}
-
-void pcibios_align_resource(void *data, struct resource *res,
- resource_size_t size, resource_size_t align)
-{
- resource_size_t start = res->start;
-
- if (res->flags & IORESOURCE_IO) {
- if (start >= 0x10000UL) {
- if ((start & 0xffffUL) < 0x4000UL) {
- start = (start & 0xffff0000UL) + 0x4000UL;
- } else if ((start & 0xffffUL) >= 0xf000UL) {
- start = (start & 0xffff0000UL) + 0x10000UL;
- }
- res->start = start;
- } else {
- if (start & 0x300) {
- start = (start + 0x3ff) & ~0x3ff;
- res->start = start;
- }
- }
- }
-}
-
+++ /dev/null
-/*
- * linux/arch/sh/kernel/rtc-mpc1211.c -- MPC-1211 on-chip RTC support
- *
- * Copyright (C) 2002 Saito.K & Jeanne
- *
- */
-
-#include <linux/init.h>
-#include <linux/kernel.h>
-#include <linux/sched.h>
-#include <linux/time.h>
-#include <linux/bcd.h>
-#include <linux/mc146818rtc.h>
-
-unsigned long get_cmos_time(void)
-{
- unsigned int year, mon, day, hour, min, sec;
-
- spin_lock(&rtc_lock);
-
- do {
- sec = CMOS_READ(RTC_SECONDS);
- min = CMOS_READ(RTC_MINUTES);
- hour = CMOS_READ(RTC_HOURS);
- day = CMOS_READ(RTC_DAY_OF_MONTH);
- mon = CMOS_READ(RTC_MONTH);
- year = CMOS_READ(RTC_YEAR);
- } while (sec != CMOS_READ(RTC_SECONDS));
-
- if (!(CMOS_READ(RTC_CONTROL) & RTC_DM_BINARY) || RTC_ALWAYS_BCD) {
- BCD_TO_BIN(sec);
- BCD_TO_BIN(min);
- BCD_TO_BIN(hour);
- BCD_TO_BIN(day);
- BCD_TO_BIN(mon);
- BCD_TO_BIN(year);
- }
-
- spin_unlock(&rtc_lock);
-
- year += 1900;
- if (year < 1970)
- year += 100;
-
- return mktime(year, mon, day, hour, min, sec);
-}
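
When the RTC is not running in binary mode, the fields read above are BCD and
are converted with BCD_TO_BIN before being handed to mktime(). A standalone
sketch of that conversion with a hypothetical register value:

    #include <stdio.h>

    /* Sketch of the BCD_TO_BIN() conversion used by get_cmos_time() above:
     * each nibble of the RTC register holds one decimal digit. */
    static unsigned int bcd_to_bin(unsigned int bcd)
    {
            return (bcd & 0x0f) + (bcd >> 4) * 10;
    }

    int main(void)
    {
            unsigned int sec = 0x59;        /* hypothetical BCD reading: 59 seconds */

            printf("0x%02x (BCD) -> %u (binary)\n", sec, bcd_to_bin(sec));
            return 0;
    }
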
-
-void mpc1211_rtc_gettimeofday(struct timeval *tv)
-{
-
- tv->tv_sec = get_cmos_time();
- tv->tv_usec = 0;
-}
-
-/* arch/i386/kernel/time.c */
-/*
- * In order to set the CMOS clock precisely, set_rtc_mmss has to be
- * called 500 ms after the second nowtime has started, because when
- * nowtime is written into the registers of the CMOS clock, it will
- * jump to the next second precisely 500 ms later. Check the Motorola
- * MC146818A or Dallas DS12887 data sheet for details.
- *
- * BUG: This routine does not handle hour overflow properly; it just
- * sets the minutes. Usually you'll only notice that after reboot!
- */
-static int set_rtc_mmss(unsigned long nowtime)
-{
- int retval = 0;
- int real_seconds, real_minutes, cmos_minutes;
- unsigned char save_control, save_freq_select;
-
- /* gets recalled with irq locally disabled */
- spin_lock(&rtc_lock);
- save_control = CMOS_READ(RTC_CONTROL); /* tell the clock it's being set */
- CMOS_WRITE((save_control|RTC_SET), RTC_CONTROL);
-
- save_freq_select = CMOS_READ(RTC_FREQ_SELECT); /* stop and reset prescaler */
- CMOS_WRITE((save_freq_select|RTC_DIV_RESET2), RTC_FREQ_SELECT);
-
- cmos_minutes = CMOS_READ(RTC_MINUTES);
- if (!(save_control & RTC_DM_BINARY) || RTC_ALWAYS_BCD)
- BCD_TO_BIN(cmos_minutes);
-
- /*
- * since we're only adjusting minutes and seconds,
- * don't interfere with hour overflow. This avoids
- * messing with unknown time zones but requires your
- * RTC not to be off by more than 15 minutes
- */
- real_seconds = nowtime % 60;
- real_minutes = nowtime / 60;
- if (((abs(real_minutes - cmos_minutes) + 15)/30) & 1)
- real_minutes += 30; /* correct for half hour time zone */
- real_minutes %= 60;
-
- if (abs(real_minutes - cmos_minutes) < 30) {
- if (!(save_control & RTC_DM_BINARY) || RTC_ALWAYS_BCD) {
- BIN_TO_BCD(real_seconds);
- BIN_TO_BCD(real_minutes);
- }
- CMOS_WRITE(real_seconds,RTC_SECONDS);
- CMOS_WRITE(real_minutes,RTC_MINUTES);
- } else {
- printk(KERN_WARNING
- "set_rtc_mmss: can't update from %d to %d\n",
- cmos_minutes, real_minutes);
- retval = -1;
- }
-
- /* The following flags have to be released exactly in this order,
- * otherwise the DS12887 (popular MC146818A clone with integrated
- * battery and quartz) will not reset the oscillator and will not
- * update precisely 500 ms later. You won't find this mentioned in
- * the Dallas Semiconductor data sheets, but who believes data
- * sheets anyway ... -- Markus Kuhn
- */
- CMOS_WRITE(save_control, RTC_CONTROL);
- CMOS_WRITE(save_freq_select, RTC_FREQ_SELECT);
- spin_unlock(&rtc_lock);
-
- return retval;
-}
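
The minutes comparison above tolerates RTCs kept in half-hour-offset time
zones: if the wall clock and the RTC disagree by roughly 30 minutes, 30 is
added before the final range check. A standalone sketch of that correction
with hypothetical values:

    #include <stdio.h>
    #include <stdlib.h>

    /* Sketch of the half-hour correction in set_rtc_mmss() above, with
     * hypothetical cmos_minutes/real_minutes values. */
    int main(void)
    {
            int cmos_minutes = 52;  /* hypothetical RTC reading */
            int real_minutes = 22;  /* hypothetical wall-clock minutes */

            /* Same test as above: a ~30 minute difference is treated as a
             * half-hour time zone, not as clock drift. */
            if (((abs(real_minutes - cmos_minutes) + 15) / 30) & 1)
                    real_minutes += 30;
            real_minutes %= 60;

            printf("would write %d (remaining delta %d)\n",
                   real_minutes, abs(real_minutes - cmos_minutes));
            return 0;
    }
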
-
-int mpc1211_rtc_settimeofday(const struct timeval *tv)
-{
- unsigned long nowtime = tv->tv_sec;
-
- return set_rtc_mmss(nowtime);
-}
-
-void mpc1211_time_init(void)
-{
- rtc_sh_get_time = mpc1211_rtc_gettimeofday;
- rtc_sh_set_time = mpc1211_rtc_settimeofday;
-}
-
+++ /dev/null
-/*
- * linux/arch/sh/boards/mpc1211/setup.c
- *
- * Copyright (C) 2002 Saito.K & Jeanne, Fujii.Y
- *
- */
-
-#include <linux/init.h>
-#include <linux/irq.h>
-#include <linux/hdreg.h>
-#include <linux/ide.h>
-#include <linux/interrupt.h>
-#include <linux/platform_device.h>
-#include <asm/io.h>
-#include <asm/machvec.h>
-#include <asm/mpc1211/mpc1211.h>
-#include <asm/mpc1211/pci.h>
-#include <asm/mpc1211/m1543c.h>
-
-/* ALI15X3 SMBus address offsets */
-#define SMBHSTSTS (0 + 0x3100)
-#define SMBHSTCNT (1 + 0x3100)
-#define SMBHSTSTART (2 + 0x3100)
-#define SMBHSTCMD (7 + 0x3100)
-#define SMBHSTADD (3 + 0x3100)
-#define SMBHSTDAT0 (4 + 0x3100)
-#define SMBHSTDAT1 (5 + 0x3100)
-#define SMBBLKDAT (6 + 0x3100)
-
-/* Other settings */
-#define MAX_TIMEOUT 500 /* times 1/100 sec */
-
-/* ALI15X3 command constants */
-#define ALI15X3_ABORT 0x04
-#define ALI15X3_T_OUT 0x08
-#define ALI15X3_QUICK 0x00
-#define ALI15X3_BYTE 0x10
-#define ALI15X3_BYTE_DATA 0x20
-#define ALI15X3_WORD_DATA 0x30
-#define ALI15X3_BLOCK_DATA 0x40
-#define ALI15X3_BLOCK_CLR 0x80
-
-/* ALI15X3 status register bits */
-#define ALI15X3_STS_IDLE 0x04
-#define ALI15X3_STS_BUSY 0x08
-#define ALI15X3_STS_DONE 0x10
-#define ALI15X3_STS_DEV 0x20 /* device error */
-#define ALI15X3_STS_COLL 0x40 /* collision or no response */
-#define ALI15X3_STS_TERM 0x80 /* terminated by abort */
-#define ALI15X3_STS_ERR 0xE0 /* all the bad error bits */
-
-static void __init pci_write_config(unsigned long busNo,
- unsigned long devNo,
- unsigned long fncNo,
- unsigned long cnfAdd,
- unsigned long cnfData)
-{
- ctrl_outl((0x80000000
- + ((busNo & 0xff) << 16)
- + ((devNo & 0x1f) << 11)
- + ((fncNo & 0x07) << 8)
- + (cnfAdd & 0xfc)), PCIPAR);
-
- ctrl_outl(cnfData, PCIPDR);
-}
-
-/*
- Initialize IRQ setting
-*/
-
-static unsigned char m_irq_mask = 0xfb;
-static unsigned char s_irq_mask = 0xff;
-
-static void disable_mpc1211_irq(unsigned int irq)
-{
- if( irq < 8) {
- m_irq_mask |= (1 << irq);
- outb(m_irq_mask,I8259_M_MR);
- } else {
- s_irq_mask |= (1 << (irq - 8));
- outb(s_irq_mask,I8259_S_MR);
- }
-
-}
-
-static void enable_mpc1211_irq(unsigned int irq)
-{
- if( irq < 8) {
- m_irq_mask &= ~(1 << irq);
- outb(m_irq_mask,I8259_M_MR);
- } else {
- s_irq_mask &= ~(1 << (irq - 8));
- outb(s_irq_mask,I8259_S_MR);
- }
-}
-
-static inline int mpc1211_irq_real(unsigned int irq)
-{
- int value;
- int irqmask;
-
- if ( irq < 8) {
- irqmask = 1<<irq;
- outb(0x0b,I8259_M_CR); /* ISR register */
- value = inb(I8259_M_CR) & irqmask;
- outb(0x0a,I8259_M_CR); /* back to the IPR reg */
- return value;
- }
- irqmask = 1<<(irq - 8);
- outb(0x0b,I8259_S_CR); /* ISR register */
- value = inb(I8259_S_CR) & irqmask;
- outb(0x0a,I8259_S_CR); /* back to the IPR reg */
- return value;
-}
-
-static void mask_and_ack_mpc1211(unsigned int irq)
-{
- if(irq < 8) {
- if(m_irq_mask & (1<<irq)){
- if(!mpc1211_irq_real(irq)){
- atomic_inc(&irq_err_count);
- printk("spurious 8259A interrupt: IRQ %x\n",irq);
- }
- } else {
- m_irq_mask |= (1<<irq);
- }
- inb(I8259_M_MR); /* DUMMY */
- outb(m_irq_mask,I8259_M_MR); /* disable */
- outb(0x60+irq,I8259_M_CR); /* EOI */
-
- } else {
- if(s_irq_mask & (1<<(irq - 8))){
- if(!mpc1211_irq_real(irq)){
- atomic_inc(&irq_err_count);
- printk("spurious 8259A interrupt: IRQ %x\n",irq);
- }
- } else {
- s_irq_mask |= (1<<(irq - 8));
- }
- inb(I8259_S_MR); /* DUMMY */
- outb(s_irq_mask,I8259_S_MR); /* disable */
- outb(0x60+(irq-8),I8259_S_CR); /* EOI */
- outb(0x60+2,I8259_M_CR);
- }
-}
-
-static void end_mpc1211_irq(unsigned int irq)
-{
- enable_mpc1211_irq(irq);
-}
-
-static unsigned int startup_mpc1211_irq(unsigned int irq)
-{
- enable_mpc1211_irq(irq);
- return 0;
-}
-
-static void shutdown_mpc1211_irq(unsigned int irq)
-{
- disable_mpc1211_irq(irq);
-}
-
-static struct hw_interrupt_type mpc1211_irq_type = {
- .typename = "MPC1211-IRQ",
- .startup = startup_mpc1211_irq,
- .shutdown = shutdown_mpc1211_irq,
- .enable = enable_mpc1211_irq,
- .disable = disable_mpc1211_irq,
- .ack = mask_and_ack_mpc1211,
- .end = end_mpc1211_irq
-};
-
-static void make_mpc1211_irq(unsigned int irq)
-{
- irq_desc[irq].chip = &mpc1211_irq_type;
- irq_desc[irq].status = IRQ_DISABLED;
- irq_desc[irq].action = 0;
- irq_desc[irq].depth = 1;
- disable_mpc1211_irq(irq);
-}
-
-int mpc1211_irq_demux(int irq)
-{
- unsigned int poll;
-
- if( irq == 2 ) {
- outb(0x0c,I8259_M_CR);
- poll = inb(I8259_M_CR);
- if(poll & 0x80) {
- irq = (poll & 0x07);
- }
- if( irq == 2) {
- outb(0x0c,I8259_S_CR);
- poll = inb(I8259_S_CR);
- irq = (poll & 0x07) + 8;
- }
- }
- return irq;
-}
-
-static void __init init_mpc1211_IRQ(void)
-{
- int i;
- /*
- * Super I/O (Just mimic PC):
- * 1: keyboard
- * 3: serial 1
- * 4: serial 0
- * 5: printer
- * 6: floppy
- * 8: rtc
- * 10: lan
- * 12: mouse
- * 14: ide0
- * 15: ide1
- */
-
- pci_write_config(0,0,0,0x54, 0xb0b0002d);
- outb(0x11, I8259_M_CR); /* master icw1 edge trigger */
- outb(0x11, I8259_S_CR); /* slave icw1 edge trigger */
- outb(0x20, I8259_M_MR); /* m icw2 base vec 0x08 */
- outb(0x28, I8259_S_MR); /* s icw2 base vec 0x70 */
- outb(0x04, I8259_M_MR); /* m icw3 slave irq2 */
- outb(0x02, I8259_S_MR); /* s icw3 slave id */
- outb(0x01, I8259_M_MR); /* m icw4 non buf normal eoi*/
- outb(0x01, I8259_S_MR); /* s icw4 non buf normal eoi*/
- outb(0xfb, I8259_M_MR); /* disable irq0--irq7 */
- outb(0xff, I8259_S_MR); /* disable irq8--irq15 */
-
- for ( i=0; i < 16; i++) {
- if(i != 2) {
- make_mpc1211_irq(i);
- }
- }
-}
-
-static void delay1000(void)
-{
- int i;
-
- for (i=0; i<1000; i++)
- ctrl_delay();
-}
-
-static int put_smb_blk(unsigned char *p, int address, int command, int no)
-{
- int temp;
- int timeout;
- int i;
-
- outb(0xff, SMBHSTSTS);
- temp = inb(SMBHSTSTS);
- for (timeout = 0; (timeout < MAX_TIMEOUT) && !(temp & ALI15X3_STS_IDLE); timeout++) {
- delay1000();
- temp = inb(SMBHSTSTS);
- }
- if (timeout >= MAX_TIMEOUT){
- return -1;
- }
-
- outb(((address & 0x7f) << 1), SMBHSTADD);
- outb(0xc0, SMBHSTCNT);
- outb(command & 0xff, SMBHSTCMD);
- outb(no & 0x1f, SMBHSTDAT0);
-
- for(i = 1; i <= no; i++) {
- outb(*p++, SMBBLKDAT);
- }
- outb(0xff, SMBHSTSTART);
-
- temp = inb(SMBHSTSTS);
- for (timeout = 0; (timeout < MAX_TIMEOUT) && !(temp & (ALI15X3_STS_ERR | ALI15X3_STS_DONE)); timeout++) {
- delay1000();
- temp = inb(SMBHSTSTS);
- }
- if (timeout >= MAX_TIMEOUT) {
- return -2;
- }
- if ( temp & ALI15X3_STS_ERR ){
- return -3;
- }
- return 0;
-}
-
-static struct resource heartbeat_resources[] = {
- [0] = {
- .start = 0xa2000000,
- .end = 0xa2000000,
- .flags = IORESOURCE_MEM,
- },
-};
-
-static struct platform_device heartbeat_device = {
- .name = "heartbeat",
- .id = -1,
- .num_resources = ARRAY_SIZE(heartbeat_resources),
- .resource = heartbeat_resources,
-};
-
-static struct platform_device *mpc1211_devices[] __initdata = {
- &heartbeat_device,
-};
-
-static int __init mpc1211_devices_setup(void)
-{
- return platform_add_devices(mpc1211_devices,
- ARRAY_SIZE(mpc1211_devices));
-}
-__initcall(mpc1211_devices_setup);
-
-/* arch/sh/boards/mpc1211/rtc.c */
-void mpc1211_time_init(void);
-
-static void __init mpc1211_setup(char **cmdline_p)
-{
- unsigned char spd_buf[128];
-
- __set_io_port_base(PA_PCI_IO);
-
- pci_write_config(0,0,0,0x54, 0xb0b00000);
-
- do {
- outb(ALI15X3_ABORT, SMBHSTCNT);
- spd_buf[0] = 0x0c;
- spd_buf[1] = 0x43;
- spd_buf[2] = 0x7f;
- spd_buf[3] = 0x03;
- spd_buf[4] = 0x00;
- spd_buf[5] = 0x03;
- spd_buf[6] = 0x00;
- } while (put_smb_blk(spd_buf, 0x69, 0, 7) < 0);
-
- board_time_init = mpc1211_time_init;
-
- return 0;
-}
-
-/*
- * The Machine Vector
- */
-static struct sh_machine_vector mv_mpc1211 __initmv = {
- .mv_name = "Interface MPC-1211(CTP/PCI/MPC-SH02)",
- .mv_setup = mpc1211_setup,
- .mv_nr_irqs = 48,
- .mv_irq_demux = mpc1211_irq_demux,
- .mv_init_irq = init_mpc1211_IRQ,
-};
#include <linux/mtd/physmap.h>
#include <linux/mtd/nand.h>
#include <linux/i2c.h>
+#include <linux/smc91x.h>
#include <asm/machvec.h>
#include <asm/io.h>
#include <asm/sh_keysc.h>
* 0x18000000 8GB 8 NAND Flash (K9K8G08U0A)
*/
+static struct smc91x_platdata smc91x_info = {
+ .flags = SMC91X_USE_16BIT,
+ .irq_flags = IRQF_TRIGGER_HIGH,
+};
+
static struct resource smc91x_eth_resources[] = {
[0] = {
.name = "SMC91C111" ,
},
[1] = {
.start = 32, /* IRQ0 */
- .flags = IORESOURCE_IRQ | IRQF_TRIGGER_HIGH,
+ .flags = IORESOURCE_IRQ,
},
};
.name = "smc91x",
.num_resources = ARRAY_SIZE(smc91x_eth_resources),
.resource = smc91x_eth_resources,
+ .dev = {
+ .platform_data = &smc91x_info,
+ },
};
static struct sh_keysc_info sh_keysc_info = {
static DECLARE_INTC_DESC(intc_desc, "r7780mp", vectors,
NULL, mask_registers, NULL, NULL);
-unsigned char * __init highlander_init_irq_r7780mp(void)
+unsigned char * __init highlander_plat_irq_setup(void)
{
if ((ctrl_inw(0xa4000700) & 0xf000) == 0x2000) {
printk(KERN_INFO "Using r7780mp interrupt controller.\n");
static DECLARE_INTC_DESC(intc_desc, "r7780rp", vectors,
NULL, mask_registers, NULL, NULL);
-unsigned char * __init highlander_init_irq_r7780rp(void)
+unsigned char * __init highlander_plat_irq_setup(void)
{
if (ctrl_inw(0xa5000600)) {
printk(KERN_INFO "Using r7780rp interrupt controller.\n");
static DECLARE_INTC_DESC(intc_desc, "r7785rp", vectors,
NULL, mask_registers, NULL, NULL);
-unsigned char * __init highlander_init_irq_r7785rp(void)
+unsigned char * __init highlander_plat_irq_setup(void)
{
if ((ctrl_inw(0xa4000158) & 0xf000) != 0x1000)
return NULL;
static unsigned char irl2irq[HL_NR_IRL];
-int highlander_irq_demux(int irq)
+static int highlander_irq_demux(int irq)
{
if (irq >= HL_NR_IRL || !irl2irq[irq])
return irq;
return irl2irq[irq];
}
-void __init highlander_init_irq(void)
+static void __init highlander_init_irq(void)
{
- unsigned char *ucp = NULL;
-
- do {
-#ifdef CONFIG_SH_R7780MP
- ucp = highlander_init_irq_r7780mp();
- if (ucp)
- break;
-#endif
-#ifdef CONFIG_SH_R7785RP
- ucp = highlander_init_irq_r7785rp();
- if (ucp)
- break;
-#endif
-#ifdef CONFIG_SH_R7780RP
- ucp = highlander_init_irq_r7780rp();
- if (ucp)
- break;
-#endif
- } while (0);
+ unsigned char *ucp = highlander_plat_irq_setup();
if (ucp) {
plat_irq_setup_pins(IRQ_MODE_IRL3210);
.resource = heartbeat_resources,
};
-#ifdef CONFIG_MFD_SM501
static struct plat_serial8250_port uart_platform_data[] = {
{
.membase = (void __iomem *)0xb3e30000,
.resource = sm501_resources,
};
-#endif /* CONFIG_MFD_SM501 */
-
static struct platform_device *rts7751r2d_devices[] __initdata = {
-#ifdef CONFIG_MFD_SM501
&uart_device,
&sm501_device,
-#endif
&heartbeat_device,
&spi_sh_sci_device,
};
{
if (register_trapped_io(&cf_trapped_io) == 0)
platform_device_register(&cf_ide_device);
+
spi_register_board_info(spi_bus, ARRAY_SIZE(spi_bus));
+
return platform_add_devices(rts7751r2d_devices,
ARRAY_SIZE(rts7751r2d_devices));
}
* linux/arch/sh/boards/se/7206/setup.c
*
* Copyright (C) 2006 Yoshinori Sato
- * Copyright (C) 2007 Paul Mundt
+ * Copyright (C) 2007 - 2008 Paul Mundt
*
* Hitachi 7206 SolutionEngine Support.
*/
#include <linux/init.h>
#include <linux/platform_device.h>
+#include <linux/smc91x.h>
#include <asm/se7206.h>
#include <asm/io.h>
#include <asm/machvec.h>
static struct resource smc91x_resources[] = {
[0] = {
- .start = 0x300,
- .end = 0x300 + 0x020 - 1,
+ .name = "smc91x-regs",
+ .start = PA_SMSC + 0x300,
+ .end = PA_SMSC + 0x300 + 0x020 - 1,
.flags = IORESOURCE_MEM,
},
[1] = {
},
};
+static struct smc91x_platdata smc91x_info = {
+ .flags = SMC91X_USE_16BIT,
+};
+
static struct platform_device smc91x_device = {
.name = "smc91x",
.id = -1,
+ .dev = {
+ .dma_mask = NULL,
+ .coherent_dma_mask = 0xffffffff,
+ .platform_data = &smc91x_info,
+ },
.num_resources = ARRAY_SIZE(smc91x_resources),
.resource = smc91x_resources,
};
#include <linux/platform_device.h>
#include <linux/ata_platform.h>
#include <linux/input.h>
+#include <linux/smc91x.h>
#include <asm/machvec.h>
#include <asm/se7722.h>
#include <asm/io.h>
};
/* SMC91x */
+static struct smc91x_platdata smc91x_info = {
+ .flags = SMC91X_USE_16BIT,
+};
+
static struct resource smc91x_eth_resources[] = {
[0] = {
.name = "smc91x-regs" ,
.dev = {
.dma_mask = NULL, /* don't use dma */
.coherent_dma_mask = 0xffffffff,
+ .platform_data = &smc91x_info,
},
.num_resources = ARRAY_SIZE(smc91x_eth_resources),
.resource = smc91x_eth_resources,
targets := vmlinux vmlinux.bin vmlinux.bin.gz \
head_32.o misc_32.o piggy.o
-EXTRA_AFLAGS := -traditional
OBJECTS = $(obj)/head_32.o $(obj)/misc_32.o
targets := vmlinux vmlinux.bin vmlinux.bin.gz \
head_64.o misc_64.o cache.o piggy.o
-EXTRA_AFLAGS := -traditional
OBJECTS := $(obj)/vmlinux_64.lds $(obj)/head_64.o $(obj)/misc_64.o \
$(obj)/cache.o
void __init plat_irq_setup(void)
{
- unsigned long long __dummy0, __dummy1=~0x00000000100000f0;
+ unsigned long long __dummy0, __dummy1=~0x00000000100000f0;
unsigned long reg;
- unsigned long data;
int i;
intc_virt = onchip_remap(INTC_BASE, 1024, "INTC");
/* Set default: per-line enable/disable, priority driven ack/eoi */
- for (i = 0; i < NR_INTC_IRQS; i++) {
- if (platform_int_priority[i] != NO_PRIORITY) {
- irq_desc[i].chip = &intc_irq_type;
- }
- }
+ for (i = 0; i < NR_INTC_IRQS; i++)
+ irq_desc[i].chip = &intc_irq_type;
/* Disable all interrupts and set all priorities to 0 to avoid trouble */
ctrl_outl( NO_PRIORITY, reg);
- /* Set IRLM */
- /* If all the priorities are set to 'no priority', then
- * assume we are using encoded mode.
- */
- irlm = platform_int_priority[IRQ_IRL0] + platform_int_priority[IRQ_IRL1] + \
- platform_int_priority[IRQ_IRL2] + platform_int_priority[IRQ_IRL3];
-
- if (irlm == NO_PRIORITY) {
- /* IRLM = 0 */
- reg = INTC_ICR_CLEAR;
- i = IRQ_INTA;
- printk("Trying to use encoded IRL0-3. IRLs unsupported.\n");
- } else {
- /* IRLM = 1 */
- reg = INTC_ICR_SET;
- i = IRQ_IRL0;
- }
- ctrl_outl(INTC_ICR_IRLM, reg);
-
- /* Set interrupt priorities according to platform description */
- for (data = 0, reg = INTC_INTPRI_0; i < NR_INTC_IRQS; i++) {
- data |= platform_int_priority[i] << ((i % INTC_INTPRI_PPREG) * 4);
- if ((i % INTC_INTPRI_PPREG) == (INTC_INTPRI_PPREG - 1)) {
- /* Upon the 7th, set Priority Register */
- ctrl_outl(data, reg);
- data = 0;
- reg += 8;
+#ifdef CONFIG_SH_CAYMAN
+ {
+ unsigned long data;
+
+ /* Set IRLM */
+ /* If all the priorities are set to 'no priority', then
+ * assume we are using encoded mode.
+ */
+ irlm = platform_int_priority[IRQ_IRL0] +
+ platform_int_priority[IRQ_IRL1] +
+ platform_int_priority[IRQ_IRL2] +
+ platform_int_priority[IRQ_IRL3];
+ if (irlm == NO_PRIORITY) {
+ /* IRLM = 0 */
+ reg = INTC_ICR_CLEAR;
+ i = IRQ_INTA;
+ printk("Trying to use encoded IRL0-3. IRLs unsupported.\n");
+ } else {
+ /* IRLM = 1 */
+ reg = INTC_ICR_SET;
+ i = IRQ_IRL0;
}
- }
+ ctrl_outl(INTC_ICR_IRLM, reg);
+
+ /* Set interrupt priorities according to platform description */
+ for (data = 0, reg = INTC_INTPRI_0; i < NR_INTC_IRQS; i++) {
+ data |= platform_int_priority[i] <<
+ ((i % INTC_INTPRI_PPREG) * 4);
+ if ((i % INTC_INTPRI_PPREG) == (INTC_INTPRI_PPREG - 1)) {
+ /* Upon the 7th, set Priority Register */
+ ctrl_outl(data, reg);
+ data = 0;
+ reg += 8;
+ }
+ }
+#endif
/*
* And now let interrupts come in.
/*
* Shared interrupt handling code for IPR and INTC2 types of IRQs.
*
- * Copyright (C) 2007 Magnus Damm
+ * Copyright (C) 2007, 2008 Magnus Damm
*
* Based on intc2.c and ipr.c
*
#endif
static unsigned int intc_prio_level[NR_IRQS]; /* for now */
+#ifdef CONFIG_CPU_SH3
+static unsigned long ack_handle[NR_IRQS];
+#endif
static inline struct intc_desc_int *get_intc_desc(unsigned int irq)
{
static void modify_8(unsigned long addr, unsigned long h, unsigned long data)
{
+ unsigned long flags;
+ local_irq_save(flags);
ctrl_outb(set_field(ctrl_inb(addr), data, h), addr);
+ local_irq_restore(flags);
}
static void modify_16(unsigned long addr, unsigned long h, unsigned long data)
{
+ unsigned long flags;
+ local_irq_save(flags);
ctrl_outw(set_field(ctrl_inw(addr), data, h), addr);
+ local_irq_restore(flags);
}
static void modify_32(unsigned long addr, unsigned long h, unsigned long data)
{
+ unsigned long flags;
+ local_irq_save(flags);
ctrl_outl(set_field(ctrl_inl(addr), data, h), addr);
+ local_irq_restore(flags);
}
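
The local_irq_save()/local_irq_restore() pairs added around these helpers
close a classic lost-update window in the read-modify-write of a shared
interrupt-controller register. A plain, sequential sketch of the hazard,
using hypothetical register contents rather than real kernel code:

    #include <stdio.h>

    /* Model of the race the modify_*() helpers above now guard against: a
     * "register" is read, modified and written back; if an interrupt handler
     * changes it between the read and the write, that change is overwritten. */
    int main(void)
    {
            unsigned int reg = 0x01;          /* hypothetical register: bit 0 set */
            unsigned int snapshot = reg;      /* task context reads ...           */

            reg |= 0x04;                      /* ... interrupt handler sets bit 2 */

            reg = snapshot | 0x02;            /* ... task writes back stale value */

            printf("register ends up 0x%02x; bit 2 set by the interrupt is lost\n",
                   reg);
            return 0;
    }
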
enum { REG_FN_ERR = 0, REG_FN_WRITE_BASE = 1, REG_FN_MODIFY_BASE = 5 };
}
}
+#ifdef CONFIG_CPU_SH3
+static void intc_mask_ack(unsigned int irq)
+{
+ struct intc_desc_int *d = get_intc_desc(irq);
+ unsigned long handle = ack_handle[irq];
+ unsigned long addr;
+
+ intc_disable(irq);
+
+ /* read register and write zero only to the associated bit */
+
+ if (handle) {
+ addr = INTC_REG(d, _INTC_ADDR_D(handle), 0);
+ ctrl_inb(addr);
+ ctrl_outb(0x3f ^ set_field(0, 1, handle), addr);
+ }
+}
+#endif
+
static struct intc_handle_int *intc_find_irq(struct intc_handle_int *hp,
unsigned int nr_hp,
unsigned int irq)
[IRQ_TYPE_EDGE_FALLING] = VALID(0),
[IRQ_TYPE_EDGE_RISING] = VALID(1),
[IRQ_TYPE_LEVEL_LOW] = VALID(2),
+ /* SH7706, SH7707 and SH7709 do not support high level triggered */
+#if !defined(CONFIG_CPU_SUBTYPE_SH7706) && \
+ !defined(CONFIG_CPU_SUBTYPE_SH7707) && \
+ !defined(CONFIG_CPU_SUBTYPE_SH7709)
[IRQ_TYPE_LEVEL_HIGH] = VALID(3),
+#endif
};
static int intc_set_sense(unsigned int irq, unsigned int type)
return 0;
}
+#ifdef CONFIG_CPU_SH3
+static unsigned int __init intc_ack_data(struct intc_desc *desc,
+ struct intc_desc_int *d,
+ intc_enum enum_id)
+{
+ struct intc_mask_reg *mr = desc->ack_regs;
+ unsigned int i, j, fn, mode;
+ unsigned long reg_e, reg_d;
+
+ for (i = 0; mr && enum_id && i < desc->nr_ack_regs; i++) {
+ mr = desc->ack_regs + i;
+
+ for (j = 0; j < ARRAY_SIZE(mr->enum_ids); j++) {
+ if (mr->enum_ids[j] != enum_id)
+ continue;
+
+ fn = REG_FN_MODIFY_BASE;
+ mode = MODE_ENABLE_REG;
+ reg_e = mr->set_reg;
+ reg_d = mr->set_reg;
+
+ fn += (mr->reg_width >> 3) - 1;
+ return _INTC_MK(fn, mode,
+ intc_get_reg(d, reg_e),
+ intc_get_reg(d, reg_d),
+ 1,
+ (mr->reg_width - 1) - j);
+ }
+ }
+
+ return 0;
+}
+#endif
+
static unsigned int __init intc_sense_data(struct intc_desc *desc,
struct intc_desc_int *d,
intc_enum enum_id)
/* irq should be disabled by default */
d->chip.mask(irq);
+
+#ifdef CONFIG_CPU_SH3
+ if (desc->ack_regs)
+ ack_handle[irq] = intc_ack_data(desc, d, enum_id);
+#endif
}
static unsigned int __init save_reg(struct intc_desc_int *d,
d->nr_reg += desc->prio_regs ? desc->nr_prio_regs * 2 : 0;
d->nr_reg += desc->sense_regs ? desc->nr_sense_regs : 0;
+#ifdef CONFIG_CPU_SH3
+ d->nr_reg += desc->ack_regs ? desc->nr_ack_regs : 0;
+#endif
d->reg = alloc_bootmem(d->nr_reg * sizeof(*d->reg));
#ifdef CONFIG_SMP
d->smp = alloc_bootmem(d->nr_reg * sizeof(*d->smp));
}
}
- BUG_ON(k > 256); /* _INTC_ADDR_E() and _INTC_ADDR_D() are 8 bits */
-
d->chip.name = desc->name;
d->chip.mask = intc_disable;
d->chip.unmask = intc_enable;
d->chip.mask_ack = intc_disable;
d->chip.set_type = intc_set_sense;
+#ifdef CONFIG_CPU_SH3
+ if (desc->ack_regs) {
+ for (i = 0; i < desc->nr_ack_regs; i++)
+ k += save_reg(d, k, desc->ack_regs[i].set_reg, 0);
+
+ d->chip.mask_ack = intc_mask_ack;
+ }
+#endif
+
+ BUG_ON(k > 256); /* _INTC_ADDR_E() and _INTC_ADDR_D() are 8 bits */
+
for (i = 0; i < desc->nr_vectors; i++) {
struct intc_vect *vect = desc->vectors + i;
iy = hy & 0x7fffffff;
if (iy < 0x00800000) {
ix = denormal_subf1(ix, iy);
- if (ix < 0) {
+ if ((int) ix < 0) {
ix = -ix;
sign ^= 0x80000000;
}
iy = hy & 0x7fffffffffffffffLL;
if (iy < 0x0010000000000000LL) {
ix = denormal_subd1(ix, iy);
- if (ix < 0) {
+ if ((long long) ix < 0) {
ix = -ix;
sign ^= 0x8000000000000000LL;
}
# Makefile for the Linux/SuperH SH-3 backends.
#
-obj-y := ex.o probe.o entry.o
+obj-y := ex.o probe.o entry.o setup-sh3.o
# CPU subtype setup
obj-$(CONFIG_CPU_SUBTYPE_SH7705) += setup-sh7705.o
--- /dev/null
+/*
+ * Shared SH3 Setup code
+ *
+ * Copyright (C) 2008 Magnus Damm
+ *
+ * This file is subject to the terms and conditions of the GNU General Public
+ * License. See the file "COPYING" in the main directory of this archive
+ * for more details.
+ */
+
+#include <linux/init.h>
+#include <linux/irq.h>
+#include <linux/io.h>
+
+/* All SH3 devices are equipped with IRQ0->5 (except sh7708) */
+
+enum {
+ UNUSED = 0,
+
+ /* interrupt sources */
+ IRQ0, IRQ1, IRQ2, IRQ3, IRQ4, IRQ5,
+};
+
+static struct intc_vect vectors_irq0123[] __initdata = {
+ INTC_VECT(IRQ0, 0x600), INTC_VECT(IRQ1, 0x620),
+ INTC_VECT(IRQ2, 0x640), INTC_VECT(IRQ3, 0x660),
+};
+
+static struct intc_vect vectors_irq45[] __initdata = {
+ INTC_VECT(IRQ4, 0x680), INTC_VECT(IRQ5, 0x6a0),
+};
+
+static struct intc_prio_reg prio_registers[] __initdata = {
+ { 0xa4000016, 0, 16, 4, /* IPRC */ { IRQ3, IRQ2, IRQ1, IRQ0 } },
+ { 0xa4000018, 0, 16, 4, /* IPRD */ { 0, 0, IRQ5, IRQ4 } },
+};
+
+static struct intc_mask_reg ack_registers[] __initdata = {
+ { 0xa4000004, 0, 8, /* IRR0 */
+ { 0, 0, IRQ5, IRQ4, IRQ3, IRQ2, IRQ1, IRQ0 } },
+};
+
+static struct intc_sense_reg sense_registers[] __initdata = {
+ { 0xa4000010, 16, 2, { 0, 0, IRQ5, IRQ4, IRQ3, IRQ2, IRQ1, IRQ0 } },
+};
+
+static DECLARE_INTC_DESC_ACK(intc_desc_irq0123, "sh3-irq0123",
+ vectors_irq0123, NULL, NULL,
+ prio_registers, sense_registers, ack_registers);
+
+static DECLARE_INTC_DESC_ACK(intc_desc_irq45, "sh3-irq45",
+ vectors_irq45, NULL, NULL,
+ prio_registers, sense_registers, ack_registers);
+
+#define INTC_ICR1 0xa4000010UL
+#define INTC_ICR1_IRQLVL (1<<14)
+
+void __init plat_irq_setup_pins(int mode)
+{
+ if (mode == IRQ_MODE_IRQ) {
+ ctrl_outw(ctrl_inw(INTC_ICR1) & ~INTC_ICR1_IRQLVL, INTC_ICR1);
+ register_intc_controller(&intc_desc_irq0123);
+ return;
+ }
+ BUG();
+}
+
+void __init plat_irq_setup_sh3(void)
+{
+ register_intc_controller(&intc_desc_irq45);
+}
};
static struct intc_vect vectors[] __initdata = {
- INTC_VECT(IRQ4, 0x680), INTC_VECT(IRQ5, 0x6a0),
+ /* IRQ0->5 are handled in setup-sh3.c */
INTC_VECT(PINT07, 0x700), INTC_VECT(PINT815, 0x720),
INTC_VECT(DMAC_DEI0, 0x800), INTC_VECT(DMAC_DEI1, 0x820),
INTC_VECT(DMAC_DEI2, 0x840), INTC_VECT(DMAC_DEI3, 0x860),
INTC_VECT(ADC_ADI, 0x980),
INTC_VECT(USB_USI0, 0xa20), INTC_VECT(USB_USI1, 0xa40),
INTC_VECT(TPU0, 0xc00), INTC_VECT(TPU1, 0xc20),
- INTC_VECT(TPU3, 0xc80), INTC_VECT(TPU1, 0xca0),
+ INTC_VECT(TPU2, 0xc80), INTC_VECT(TPU3, 0xca0),
INTC_VECT(TMU0, 0x400), INTC_VECT(TMU1, 0x420),
INTC_VECT(TMU2_TUNI, 0x440), INTC_VECT(TMU2_TICPI, 0x460),
INTC_VECT(RTC_ATI, 0x480), INTC_VECT(RTC_PRI, 0x4a0),
static DECLARE_INTC_DESC(intc_desc, "sh7705", vectors, groups,
NULL, prio_registers, NULL);
-static struct intc_vect vectors_irq[] __initdata = {
- INTC_VECT(IRQ0, 0x600), INTC_VECT(IRQ1, 0x620),
- INTC_VECT(IRQ2, 0x640), INTC_VECT(IRQ3, 0x660),
-};
-
-static DECLARE_INTC_DESC(intc_desc_irq, "sh7705-irq", vectors_irq, NULL,
- NULL, prio_registers, NULL);
-
static struct plat_sci_port sci_platform_data[] = {
{
.mapbase = 0xa4410000,
}
__initcall(sh7705_devices_setup);
-void __init plat_irq_setup_pins(int mode)
-{
- if (mode == IRQ_MODE_IRQ) {
- register_intc_controller(&intc_desc_irq);
- return;
- }
- BUG();
-}
-
void __init plat_irq_setup(void)
{
register_intc_controller(&intc_desc);
+ plat_irq_setup_sh3();
}
#if defined(CONFIG_CPU_SUBTYPE_SH7706) || \
defined(CONFIG_CPU_SUBTYPE_SH7707) || \
defined(CONFIG_CPU_SUBTYPE_SH7709)
- INTC_VECT(IRQ4, 0x680), INTC_VECT(IRQ5, 0x6a0),
+ /* IRQ0->5 are handled in setup-sh3.c */
INTC_VECT(DMAC_DEI0, 0x800), INTC_VECT(DMAC_DEI1, 0x820),
INTC_VECT(DMAC_DEI2, 0x840), INTC_VECT(DMAC_DEI3, 0x860),
INTC_VECT(ADC_ADI, 0x980),
static DECLARE_INTC_DESC(intc_desc, "sh770x", vectors, groups,
NULL, prio_registers, NULL);
-#if defined(CONFIG_CPU_SUBTYPE_SH7706) || \
- defined(CONFIG_CPU_SUBTYPE_SH7707) || \
- defined(CONFIG_CPU_SUBTYPE_SH7709)
-static struct intc_vect vectors_irq[] __initdata = {
- INTC_VECT(IRQ0, 0x600), INTC_VECT(IRQ1, 0x620),
- INTC_VECT(IRQ2, 0x640), INTC_VECT(IRQ3, 0x660),
-};
-
-static DECLARE_INTC_DESC(intc_desc_irq, "sh770x-irq", vectors_irq, NULL,
- NULL, prio_registers, NULL);
-#endif
-
static struct resource rtc_resources[] = {
[0] = {
.start = 0xfffffec0,
}
__initcall(sh770x_devices_setup);
-#define INTC_ICR1 0xa4000010UL
-#define INTC_ICR1_IRQLVL (1<<14)
-
-void __init plat_irq_setup_pins(int mode)
+void __init plat_irq_setup(void)
{
- if (mode == IRQ_MODE_IRQ) {
+ register_intc_controller(&intc_desc);
#if defined(CONFIG_CPU_SUBTYPE_SH7706) || \
defined(CONFIG_CPU_SUBTYPE_SH7707) || \
defined(CONFIG_CPU_SUBTYPE_SH7709)
- ctrl_outw(ctrl_inw(INTC_ICR1) & ~INTC_ICR1_IRQLVL, INTC_ICR1);
- register_intc_controller(&intc_desc_irq);
- return;
+ plat_irq_setup_sh3();
#endif
- }
- BUG();
-}
-
-void __init plat_irq_setup(void)
-{
- register_intc_controller(&intc_desc);
}
};
static struct intc_vect vectors[] __initdata = {
- INTC_VECT(IRQ4, 0x680), INTC_VECT(IRQ5, 0x6a0),
+ /* IRQ0->5 are handled in setup-sh3.c */
INTC_VECT(DMAC_DEI0, 0x800), INTC_VECT(DMAC_DEI1, 0x820),
INTC_VECT(DMAC_DEI2, 0x840), INTC_VECT(DMAC_DEI3, 0x860),
INTC_VECT(SCIF0_ERI, 0x880), INTC_VECT(SCIF0_RXI, 0x8a0),
{ 0xa4000016, 0, 16, 4, /* IPRC */ { IRQ3, IRQ2, IRQ1, IRQ0 } },
{ 0xa4000018, 0, 16, 4, /* IPRD */ { 0, 0, IRQ5, IRQ4 } },
{ 0xa400001a, 0, 16, 4, /* IPRE */ { DMAC1, SCIF0, SCIF1 } },
- { 0xa4080000, 0, 16, 4, /* IPRF */ { 0, DMAC2 } },
-#ifdef CONFIG_CPU_SUBTYPE_SH7710
- { 0xa4080000, 0, 16, 4, /* IPRF */ { IPSEC } },
-#endif
+ { 0xa4080000, 0, 16, 4, /* IPRF */ { IPSEC, DMAC2 } },
{ 0xa4080002, 0, 16, 4, /* IPRG */ { EDMAC0, EDMAC1, EDMAC2 } },
{ 0xa4080004, 0, 16, 4, /* IPRH */ { 0, 0, 0, SIOF0 } },
{ 0xa4080006, 0, 16, 4, /* IPRI */ { 0, 0, SIOF1 } },
static DECLARE_INTC_DESC(intc_desc, "sh7710", vectors, groups,
NULL, prio_registers, NULL);
-static struct intc_vect vectors_irq[] __initdata = {
- INTC_VECT(IRQ0, 0x600), INTC_VECT(IRQ1, 0x620),
- INTC_VECT(IRQ2, 0x640), INTC_VECT(IRQ3, 0x660),
-};
-
-static DECLARE_INTC_DESC(intc_desc_irq, "sh7710-irq", vectors_irq, NULL,
- NULL, prio_registers, NULL);
-
static struct resource rtc_resources[] = {
[0] = {
.start = 0xa413fec0,
}
__initcall(sh7710_devices_setup);
-void __init plat_irq_setup_pins(int mode)
-{
- if (mode == IRQ_MODE_IRQ) {
- register_intc_controller(&intc_desc_irq);
- return;
- }
- BUG();
-}
-
void __init plat_irq_setup(void)
{
register_intc_controller(&intc_desc);
+ plat_irq_setup_sh3();
}
#include <linux/serial_sci.h>
#include <asm/rtc.h>
-#define INTC_ICR1 0xA4140010UL
-#define INTC_ICR_IRLM 0x4000
-#define INTC_ICR_IRQ (~INTC_ICR_IRLM)
-
static struct resource rtc_resources[] = {
[0] = {
.start = 0xa413fec0,
};
static struct intc_vect vectors[] __initdata = {
+ /* IRQ0->5 are handled in setup-sh3.c */
INTC_VECT(TMU0, 0x400), INTC_VECT(TMU1, 0x420),
INTC_VECT(TMU2, 0x440), INTC_VECT(RTC_ATI, 0x480),
INTC_VECT(RTC_PRI, 0x4a0), INTC_VECT(RTC_CUI, 0x4c0),
{ 0xA414FEE4UL, 0, 16, 4, /* IPRB */ { WDT, REF_RCMI, SIM, 0 } },
{ 0xA4140016UL, 0, 16, 4, /* IPRC */ { IRQ3, IRQ2, IRQ1, IRQ0 } },
{ 0xA4140018UL, 0, 16, 4, /* IPRD */ { USBF_SPD, TMU_SUNI, IRQ5, IRQ4 } },
-#if defined(CONFIG_CPU_SUBTYPE_SH7720)
{ 0xA414001AUL, 0, 16, 4, /* IPRE */ { DMAC1, 0, LCDC, SSL } },
-#else
- { 0xA414001AUL, 0, 16, 4, /* IPRE */ { DMAC1, 0, LCDC, 0 } },
-#endif
{ 0xA4080000UL, 0, 16, 4, /* IPRF */ { ADC, DMAC2, USBFI, CMT } },
{ 0xA4080002UL, 0, 16, 4, /* IPRG */ { SCIF0, SCIF1, 0, 0 } },
{ 0xA4080004UL, 0, 16, 4, /* IPRH */ { PINT07, PINT815, TPU, IIC } },
static DECLARE_INTC_DESC(intc_desc, "sh7720", vectors, groups,
NULL, prio_registers, NULL);
-static struct intc_sense_reg sense_registers[] __initdata = {
- { INTC_ICR1, 16, 2, { 0, 0, IRQ5, IRQ4, IRQ3, IRQ2, IRQ1, IRQ0 } },
-};
-
-static struct intc_vect vectors_irq[] __initdata = {
- INTC_VECT(IRQ0, 0x600), INTC_VECT(IRQ1, 0x620),
- INTC_VECT(IRQ2, 0x640), INTC_VECT(IRQ3, 0x660),
- INTC_VECT(IRQ4, 0x680), INTC_VECT(IRQ5, 0x6a0),
-};
-
-static DECLARE_INTC_DESC(intc_irq_desc, "sh7720-irq", vectors_irq,
- NULL, NULL, prio_registers, sense_registers);
-
-void __init plat_irq_setup_pins(int mode)
-{
- switch (mode) {
- case IRQ_MODE_IRQ:
- ctrl_outw(ctrl_inw(INTC_ICR1) & INTC_ICR_IRQ, INTC_ICR1);
- register_intc_controller(&intc_irq_desc);
- break;
- default:
- BUG();
- }
-}
-
void __init plat_irq_setup(void)
{
register_intc_controller(&intc_desc);
+ plat_irq_setup_sh3();
}
trap_jtable:
.long do_exception_error /* 0x000 */
.long do_exception_error /* 0x020 */
+#ifdef CONFIG_MMU
.long tlb_miss_load /* 0x040 */
.long tlb_miss_store /* 0x060 */
+#else
+ .long do_exception_error
+ .long do_exception_error
+#endif
! ARTIFICIAL pseudo-EXPEVT setting
.long do_debug_interrupt /* 0x080 */
+#ifdef CONFIG_MMU
.long tlb_miss_load /* 0x0A0 */
.long tlb_miss_store /* 0x0C0 */
+#else
+ .long do_exception_error
+ .long do_exception_error
+#endif
.long do_address_error_load /* 0x0E0 */
.long do_address_error_store /* 0x100 */
#ifdef CONFIG_SH_FPU
.endr
.long do_IRQ /* 0xA00 */
.long do_IRQ /* 0xA20 */
+#ifdef CONFIG_MMU
.long itlb_miss_or_IRQ /* 0xA40 */
+#else
+ .long do_IRQ
+#endif
.long do_IRQ /* 0xA60 */
.long do_IRQ /* 0xA80 */
+#ifdef CONFIG_MMU
.long itlb_miss_or_IRQ /* 0xAA0 */
+#else
+ .long do_IRQ
+#endif
.long do_exception_error /* 0xAC0 */
.long do_address_error_exec /* 0xAE0 */
.rept 8
* Instead of '.space 1024-TEXT_SIZE' place the RESVEC
* block making sure the final alignment is correct.
*/
+#ifdef CONFIG_MMU
tlb_miss:
synco /* TAKum03020 (but probably a good idea anyway.) */
putcon SP, KCR1
getcon KCR1, SP
pta handle_exception, tr0
blink tr0, ZERO
+#else /* CONFIG_MMU */
+ .balign 256
+#endif
/* NB TAKE GREAT CARE HERE TO ENSURE THAT THE INTERRUPT CODE
DOES END UP AT VBR+0x600 */
* fpu_error_or_IRQ? is a helper to deflect to the right cause.
*
*/
+#ifdef CONFIG_MMU
tlb_miss_load:
or SP, ZERO, r2
or ZERO, ZERO, r3 /* Read */
movi do_page_fault, r6
ptabs r6, tr0
blink tr0, ZERO
+#endif /* CONFIG_MMU */
fpu_error_or_IRQA:
pta its_IRQ, tr0
ptabs LINK, tr0
blink tr0, r63
+#ifdef CONFIG_MMU
/*
* --- User Access Handling Section
*/
ptabs LINK, tr0
blink tr0, ZERO
+#endif /* CONFIG_MMU */
/*
* int __strncpy_from_user(unsigned long __dest, unsigned long __src,
.global asm_uaccess_start /* Just a marker */
asm_uaccess_start:
+#ifdef CONFIG_MMU
.long ___copy_user1, ___copy_user_exit
.long ___copy_user2, ___copy_user_exit
.long ___clear_user1, ___clear_user_exit
+#endif
.long ___strncpy_from_user1, ___strncpy_from_user_exit
.long ___strnlen_user1, ___strnlen_user_exit
.long ___get_user_asm_b1, ___get_user_asm_b_exit
#include <linux/string.h>
#include <asm/processor.h>
#include <asm/cache.h>
+#include <asm/tlb.h>
int __init detect_cpu_and_cache_system(void)
{
set_bit(SH_CACHE_MODE_WB, &(boot_cpu_data.dcache.flags));
#endif
+ /* Setup some I/D TLB defaults */
+ sh64_tlb_init();
+
return 0;
}
*/
static void scif_sercon_init(char *s)
{
+ struct uart_port *port = &scif_port;
unsigned baud = DEFAULT_BAUD;
+ unsigned int status;
char *e;
if (*s == ',')
baud = DEFAULT_BAUD;
}
- ctrl_outw(0, scif_port.mapbase + 8);
- ctrl_outw(0, scif_port.mapbase);
+ do {
+ status = sci_in(port, SCxSR);
+ } while (!(status & SCxSR_TEND(port)));
+
+ sci_out(port, SCSCR, 0); /* TE=0, RE=0 */
+ sci_out(port, SCFCR, SCFCR_RFRST | SCFCR_TFRST);
+ sci_out(port, SCSMR, 0);
/* Set baud rate */
- ctrl_outb((CONFIG_SH_PCLK_FREQ + 16 * baud) /
- (32 * baud) - 1, scif_port.mapbase + 4);
-
- ctrl_outw(12, scif_port.mapbase + 24);
- ctrl_outw(8, scif_port.mapbase + 24);
- ctrl_outw(0, scif_port.mapbase + 32);
- ctrl_outw(0x60, scif_port.mapbase + 16);
- ctrl_outw(0, scif_port.mapbase + 36);
- ctrl_outw(0x30, scif_port.mapbase + 8);
+ sci_out(port, SCBRR, (CONFIG_SH_PCLK_FREQ + 16 * baud) /
+ (32 * baud) - 1);
+ udelay((1000000+(baud-1)) / baud); /* Wait one bit interval */
+
+ sci_out(port, SCSPTR, 0);
+ sci_out(port, SCxSR, 0x60);
+ sci_out(port, SCLSR, 0);
+
+ sci_out(port, SCFCR, 0);
+ sci_out(port, SCSCR, 0x30); /* TE=1, RE=1 */
}
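
The bit-rate register value written above follows the usual SCIF formula,
SCBRR = PCLK / (32 * baud) - 1, with 16 * baud added first so the division
rounds to the nearest integer. A standalone check of that arithmetic with a
hypothetical peripheral clock:

    #include <stdio.h>

    /* Standalone check of the SCBRR computation in scif_sercon_init() above;
     * the peripheral clock below is a hypothetical value, not a board fact. */
    int main(void)
    {
            unsigned long pclk = 33333333;  /* hypothetical peripheral clock, Hz */
            unsigned long baud = 115200;
            unsigned long brr = (pclk + 16 * baud) / (32 * baud) - 1;

            printf("SCBRR = %lu (resulting rate %lu baud)\n",
                   brr, pclk / (32 * (brr + 1)));
            return 0;
    }
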
#endif /* defined(CONFIG_CPU_SUBTYPE_SH7720) */
#endif /* !defined(CONFIG_SH_STANDARD_BIOS) */
* sh_mv= on the command line, prior to .machvec.init teardown.
*/
struct sh_machine_vector sh_mv = { .mv_name = "generic", };
+EXPORT_SYMBOL(sh_mv);
#ifdef CONFIG_VT
struct screen_info screen_info;
.flags = IORESOURCE_BUSY | IORESOURCE_MEM,
};
+static struct resource bss_resource = {
+ .name = "Kernel bss",
+ .flags = IORESOURCE_BUSY | IORESOURCE_MEM,
+};
+
unsigned long memory_start;
EXPORT_SYMBOL(memory_start);
unsigned long memory_end = 0;
EXPORT_SYMBOL(memory_end);
+static struct resource mem_resources[MAX_NUMNODES];
+
int l1i_cache_shape, l1d_cache_shape, l2_cache_shape;
static int __init early_parse_mem(char *p)
{}
#endif
+void __init __add_active_range(unsigned int nid, unsigned long start_pfn,
+ unsigned long end_pfn)
+{
+ struct resource *res = &mem_resources[nid];
+
+ WARN_ON(res->name); /* max one active range per node for now */
+
+ res->name = "System RAM";
+ res->start = start_pfn << PAGE_SHIFT;
+ res->end = (end_pfn << PAGE_SHIFT) - 1;
+ res->flags = IORESOURCE_MEM | IORESOURCE_BUSY;
+ if (request_resource(&iomem_resource, res)) {
+ pr_err("unable to request memory_resource 0x%lx 0x%lx\n",
+ start_pfn, end_pfn);
+ return;
+ }
+
+ /*
+ * We don't know which RAM region contains kernel data,
+ * so we try it repeatedly and let the resource manager
+ * test it.
+ */
+ request_resource(res, &code_resource);
+ request_resource(res, &data_resource);
+ request_resource(res, &bss_resource);
+
+#ifdef CONFIG_KEXEC
+ if (crashk_res.start != crashk_res.end)
+ request_resource(res, &crashk_res);
+#endif
+
+ add_active_range(nid, start_pfn, end_pfn);
+}
+
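
As the comment above notes, the code does not track which RAM region holds the
kernel's code, data and bss; the resources are simply offered to every region
and only the region that actually contains them accepts the request. A small
standalone model of that containment test, with hypothetical address ranges
(request_resource() itself does more bookkeeping than this):

    #include <stdio.h>

    struct range { unsigned long start, end; };

    /* A child range is accepted only by a parent that fully contains it. */
    static int request(struct range parent, struct range child)
    {
            return child.start >= parent.start && child.end <= parent.end;
    }

    int main(void)
    {
            struct range ram[] = { { 0x08000000, 0x0bffffff },
                                   { 0x0c000000, 0x0fffffff } };   /* two nodes */
            struct range kernel_code = { 0x0c001000, 0x0c3fffff }; /* hypothetical */
            int i;

            for (i = 0; i < 2; i++)
                    printf("node %d: request %s\n", i,
                           request(ram[i], kernel_code) ? "succeeds" : "fails");
            return 0;
    }
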
void __init setup_bootmem_allocator(unsigned long free_pfn)
{
unsigned long bootmap_size;
bootmap_size = init_bootmem_node(NODE_DATA(0), free_pfn,
min_low_pfn, max_low_pfn);
- add_active_range(0, min_low_pfn, max_low_pfn);
+ __add_active_range(0, min_low_pfn, max_low_pfn);
register_bootmem_low_pages();
node_set_online(0);
code_resource.end = virt_to_phys(_etext)-1;
data_resource.start = virt_to_phys(_etext);
data_resource.end = virt_to_phys(_edata)-1;
+ bss_resource.start = virt_to_phys(__bss_start);
+ bss_resource.end = virt_to_phys(_ebss)-1;
memory_start = (unsigned long)__va(__MEMORY_START);
if (!memory_end)
extern int dump_fpu(struct pt_regs *, elf_fpregset_t *);
extern struct hw_interrupt_type no_irq_type;
-EXPORT_SYMBOL(sh_mv);
-
/* platform dependent support */
EXPORT_SYMBOL(dump_fpu);
EXPORT_SYMBOL(kernel_thread);
#include <linux/in6.h>
#include <linux/interrupt.h>
#include <linux/screen_info.h>
+#include <asm/cacheflush.h>
#include <asm/processor.h>
#include <asm/uaccess.h>
#include <asm/checksum.h>
EXPORT_SYMBOL(dump_fpu);
EXPORT_SYMBOL(kernel_thread);
+#if !defined(CONFIG_CACHE_OFF) && defined(CONFIG_MMU)
+EXPORT_SYMBOL(clear_user_page);
+#endif
+
+#ifndef CONFIG_CACHE_OFF
+EXPORT_SYMBOL(flush_dcache_page);
+#endif
+
/* Networking helper routines. */
+EXPORT_SYMBOL(csum_partial);
EXPORT_SYMBOL(csum_partial_copy_nocheck);
+#ifdef CONFIG_IPV6
+EXPORT_SYMBOL(csum_ipv6_magic);
+#endif
#ifdef CONFIG_VT
EXPORT_SYMBOL(screen_info);
#endif
+EXPORT_SYMBOL(__put_user_asm_b);
+EXPORT_SYMBOL(__put_user_asm_w);
EXPORT_SYMBOL(__put_user_asm_l);
+EXPORT_SYMBOL(__put_user_asm_q);
+EXPORT_SYMBOL(__get_user_asm_b);
+EXPORT_SYMBOL(__get_user_asm_w);
EXPORT_SYMBOL(__get_user_asm_l);
+EXPORT_SYMBOL(__get_user_asm_q);
+EXPORT_SYMBOL(__strnlen_user);
+EXPORT_SYMBOL(__strncpy_from_user);
+EXPORT_SYMBOL(clear_page);
+EXPORT_SYMBOL(__clear_user);
EXPORT_SYMBOL(copy_page);
EXPORT_SYMBOL(__copy_user);
EXPORT_SYMBOL(empty_zero_page);
EXPORT_SYMBOL(memcpy);
EXPORT_SYMBOL(__udelay);
EXPORT_SYMBOL(__ndelay);
+EXPORT_SYMBOL(__const_udelay);
/* Ugh. These come in from libgcc.a at link time. */
#define DECLARE_EXPORT(name) extern void name(void);EXPORT_SYMBOL(name)
DECLARE_EXPORT(__sdivsi3);
+DECLARE_EXPORT(__sdivsi3_2);
DECLARE_EXPORT(__muldi3);
DECLARE_EXPORT(__udivsi3);
+DECLARE_EXPORT(__div_table);
tv->tv_sec = sec;
tv->tv_usec = usec;
}
+EXPORT_SYMBOL(do_gettimeofday);
int do_settimeofday(struct timespec *tv)
{
* the irq version of write_lock because as just said we have irq
* locally disabled. -arca
*/
- write_lock(&xtime_lock);
+ write_seqlock(&xtime_lock);
asm ("getcon cr62, %0" : "=r" (current_ctc));
ctc_last_interrupt = (unsigned long) current_ctc;
/* do it again in 60 s */
last_rtc_update = xtime.tv_sec - 600;
}
- write_unlock(&xtime_lock);
+ write_sequnlock(&xtime_lock);
#ifndef CONFIG_SMP
update_process_times(user_mode(get_irq_regs()));
rr->pc = regs->pc;
if (sp < stack_bottom + 3092) {
- printk("evt_debug : stack underflow report\n");
int i, j;
+ printk("evt_debug : stack underflow report\n");
for (j=0, i = event_ptr; j<16; j++) {
rr = event_ring + i;
printk("evt=%08x event=%08x tra=%08x pid=%5d sp=%08lx pc=%08lx\n",
# Makefile for the Linux SuperH-specific parts of the memory manager.
#
-obj-y := init.o extable_64.o consistent.o
+obj-y := init.o consistent.o
-mmu-y := tlb-nommu.o pg-nommu.o
-mmu-$(CONFIG_MMU) := fault_64.o ioremap_64.o tlbflush_64.o tlb-sh5.o
+mmu-y := tlb-nommu.o pg-nommu.o extable_32.o
+mmu-$(CONFIG_MMU) := fault_64.o ioremap_64.o tlbflush_64.o tlb-sh5.o \
+ extable_64.o
ifndef CONFIG_CACHE_OFF
obj-y += cache-sh5.o
sh64_icache_inv_current_user_range(vaddr, end);
}
+#ifdef CONFIG_MMU
/*
* These *MUST* lie in an area of virtual address space that's otherwise
* unused.
else
sh64_clear_user_page_coloured(to, address);
}
+#endif
return shmedia_alloc_io(phys, size, name);
}
+EXPORT_SYMBOL(onchip_remap);
void onchip_unmap(unsigned long vaddr)
{
kfree(res);
}
}
+EXPORT_SYMBOL(onchip_unmap);
#ifdef CONFIG_PROC_FS
static int
free_pfn = start_pfn = start >> PAGE_SHIFT;
end_pfn = end >> PAGE_SHIFT;
- add_active_range(nid, start_pfn, end_pfn);
+ __add_active_range(nid, start_pfn, end_pfn);
/* Node-local pgdat */
NODE_DATA(nid) = pfn_to_kaddr(free_pfn);
7751SYSTEMH SH_7751_SYSTEMH
HP6XX SH_HP6XX
DREAMCAST SH_DREAMCAST
-MPC1211 SH_MPC1211
SNAPGEAR SH_SECUREEDGE5410
EDOSK7705 SH_EDOSK7705
SH4202_MICRODEV SH_SH4202_MICRODEV
.align 4
.globl linux_sparc_syscall
linux_sparc_syscall:
+ sethi %hi(PSR_SYSCALL), %l4
+ or %l0, %l4, %l0
/* Direct access to user regs, much faster. */
cmp %g1, NR_SYSCALLS
bgeu linux_sparc_ni_syscall
(char __user * __user *)regs->u_regs[base + UREG_I2],
regs);
putname(filename);
- if (error == 0) {
- task_lock(current);
- current->ptrace &= ~PT_DTRACE;
- task_unlock(current);
- }
out:
return error;
}
switch (pos) {
case 32: /* PSR */
psr = regs->psr;
- psr &= ~PSR_ICC;
- psr |= (reg & PSR_ICC);
+ psr &= ~(PSR_ICC | PSR_SYSCALL);
+ psr |= (reg & (PSR_ICC | PSR_SYSCALL));
regs->psr = psr;
break;
case 33: /* PC */
break;
default:
+ if (request == PTRACE_SPARC_DETACH)
+ request = PTRACE_DETACH;
ret = ptrace_request(child, request, addr, data);
break;
}
ret_trap_entry:
ret_trap_lockless_ipi:
andcc %t_psr, PSR_PS, %g0
+ sethi %hi(PSR_SYSCALL), %g1
be 1f
- nop
+ andn %t_psr, %g1, %t_psr
wr %t_psr, 0x0, %psr
b ret_trap_kernel
ld [%sp + STACKFRAME_SZ + PT_PSR], %t_psr
mov %l5, %o1
- mov %l6, %o2
call do_signal
add %sp, STACKFRAME_SZ, %o0 ! pt_regs ptr
ld [%sp + STACKFRAME_SZ + PT_PSR], %t_psr
clr %l6
ret_trap_continue:
+ sethi %hi(PSR_SYSCALL), %g1
+ andn %t_psr, %g1, %t_psr
wr %t_psr, 0x0, %psr
WRITE_PAUSE
LOAD_PT_PRIV(sp, t_psr, t_pc, t_npc)
or %t_pc, %t_npc, %g2
andcc %g2, 0x3, %g0
+ sethi %hi(PSR_SYSCALL), %g2
be 1f
- nop
+ andn %t_psr, %g2, %t_psr
b ret_trap_unaligned_pc
add %sp, STACKFRAME_SZ, %o0
1:
LOAD_PT_ALL(sp, t_psr, t_pc, t_npc, g1)
2:
+ sethi %hi(PSR_SYSCALL), %twin_tmp1
+ andn %t_psr, %twin_tmp1, %t_psr
wr %t_psr, 0x0, %psr
WRITE_PAUSE
regs->psr = (up_psr & ~(PSR_ICC | PSR_EF))
| (regs->psr & (PSR_ICC | PSR_EF));
+ /* Prevent syscall restart. */
+ pt_regs_clear_syscall(regs);
+
err |= __get_user(fpu_save, &sf->fpu_save);
if (fpu_save)
regs->psr = (regs->psr & ~PSR_ICC) | (psr & PSR_ICC);
+ /* Prevent syscall restart. */
+ pt_regs_clear_syscall(regs);
+
err |= __get_user(fpu_save, &sf->fpu_save);
if (fpu_save)
static inline void __user *get_sigframe(struct sigaction *sa, struct pt_regs *regs, unsigned long framesize)
{
- unsigned long sp;
+ unsigned long sp = regs->u_regs[UREG_FP];
- sp = regs->u_regs[UREG_FP];
+ /*
+ * If we are on the alternate signal stack and would overflow it, don't.
+ * Return an always-bogus address instead so we will die with SIGSEGV.
+ */
+ if (on_sig_stack(sp) && !likely(on_sig_stack(sp - framesize)))
+ return (void __user *) -1L;
/* This is the X/Open sanctioned signal stack switching. */
if (sa->sa_flags & SA_ONSTACK) {
- if (!on_sig_stack(sp) && !((current->sas_ss_sp + current->sas_ss_size) & 7))
+ if (sas_ss_flags(sp) == 0)
sp = current->sas_ss_sp + current->sas_ss_size;
}
+
+ /* Always align the stack frame. This handles two cases. First,
+ * sigaltstack need not be mindful of platform specific stack
+ * alignment. Second, if we took this signal because the stack
+ * is not aligned properly, we'd like to take the signal cleanly
+ * and report that.
+ */
+ sp &= ~7UL;
+
return (void __user *)(sp - framesize);
}
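
The new `sp &= ~7UL' above guarantees an 8-byte-aligned signal frame no matter
how the stack pointer arrived: sigaltstack need not supply an aligned address,
and the signal may itself have been raised because the stack was misaligned.
A tiny standalone illustration with a hypothetical stack pointer:

    #include <stdio.h>

    /* Sketch of the alignment step added in get_sigframe() above; the address
     * and frame size are hypothetical. */
    int main(void)
    {
            unsigned long sp = 0xbfff1235UL;        /* hypothetical, misaligned */
            unsigned long framesize = 0x200;

            sp &= ~7UL;                             /* always align the frame */

            printf("frame would start at 0x%lx\n", sp - framesize);
            return 0;
    }
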
* want to handle. Thus you cannot kill init even with a SIGKILL even by
* mistake.
*/
-asmlinkage void do_signal(struct pt_regs * regs, unsigned long orig_i0, int restart_syscall)
+asmlinkage void do_signal(struct pt_regs * regs, unsigned long orig_i0)
{
- siginfo_t info;
- struct sparc_deliver_cookie cookie;
struct k_sigaction ka;
- int signr;
+ int restart_syscall;
sigset_t *oldset;
+ siginfo_t info;
+ int signr;
- cookie.restart_syscall = restart_syscall;
- cookie.orig_i0 = orig_i0;
+ if (pt_regs_is_syscall(regs) && (regs->psr & PSR_C))
+ restart_syscall = 1;
+ else
+ restart_syscall = 0;
if (test_thread_flag(TIF_RESTORE_SIGMASK))
oldset = &current->saved_sigmask;
else
oldset = &current->blocked;
- signr = get_signal_to_deliver(&info, &ka, regs, &cookie);
+ signr = get_signal_to_deliver(&info, &ka, regs, NULL);
+
+ /* If the debugger messes with the program counter, it clears
+ * the software "in syscall" bit, directing us to not perform
+ * a syscall restart.
+ */
+ if (restart_syscall && !pt_regs_is_syscall(regs))
+ restart_syscall = 0;
+
if (signr > 0) {
- if (cookie.restart_syscall)
- syscall_restart(cookie.orig_i0, regs, &ka.sa);
+ if (restart_syscall)
+ syscall_restart(orig_i0, regs, &ka.sa);
handle_signal(signr, &ka, &info, oldset, regs);
/* a signal was successfully delivered; the saved
clear_thread_flag(TIF_RESTORE_SIGMASK);
return;
}
- if (cookie.restart_syscall &&
+ if (restart_syscall &&
(regs->u_regs[UREG_I0] == ERESTARTNOHAND ||
regs->u_regs[UREG_I0] == ERESTARTSYS ||
regs->u_regs[UREG_I0] == ERESTARTNOINTR)) {
/* replay the system call when we are done */
- regs->u_regs[UREG_I0] = cookie.orig_i0;
+ regs->u_regs[UREG_I0] = orig_i0;
regs->pc -= 4;
regs->npc -= 4;
}
- if (cookie.restart_syscall &&
+ if (restart_syscall &&
regs->u_regs[UREG_I0] == ERESTART_RESTARTBLOCK) {
regs->u_regs[UREG_G1] = __NR_restart_syscall;
regs->pc -= 4;
out:
return ret;
}
-
-void ptrace_signal_deliver(struct pt_regs *regs, void *cookie)
-{
- struct sparc_deliver_cookie *cp = cookie;
-
- if (cp->restart_syscall &&
- (regs->u_regs[UREG_I0] == ERESTARTNOHAND ||
- regs->u_regs[UREG_I0] == ERESTARTSYS ||
- regs->u_regs[UREG_I0] == ERESTARTNOINTR)) {
- /* replay the system call when we are done */
- regs->u_regs[UREG_I0] = cp->orig_i0;
- regs->pc -= 4;
- regs->npc -= 4;
- cp->restart_syscall = 0;
- }
-
- if (cp->restart_syscall &&
- regs->u_regs[UREG_I0] == ERESTART_RESTARTBLOCK) {
- regs->u_regs[UREG_G1] = __NR_restart_syscall;
- regs->pc -= 4;
- regs->npc -= 4;
- cp->restart_syscall = 0;
- }
-}
.text
.align 64
- .globl etrap, etrap_irq, etraptl1
+ .globl etrap_syscall, etrap, etrap_irq, etraptl1
etrap: rdpr %pil, %g2
-etrap_irq:
- TRAP_LOAD_THREAD_REG(%g6, %g1)
+etrap_irq: clr %g3
+etrap_syscall: TRAP_LOAD_THREAD_REG(%g6, %g1)
rdpr %tstate, %g1
+ or %g1, %g3, %g1
sllx %g2, 20, %g3
andcc %g1, TSTATE_PRIV, %g0
or %g1, %g3, %g1
32 * sizeof(u64),
33 * sizeof(u64));
if (!ret) {
- /* Only the condition codes can be modified
- * in the %tstate register.
+ /* Only the condition codes and the "in syscall"
+ * state can be modified in the %tstate register.
*/
- tstate &= (TSTATE_ICC | TSTATE_XCC);
- regs->tstate &= ~(TSTATE_ICC | TSTATE_XCC);
+ tstate &= (TSTATE_ICC | TSTATE_XCC | TSTATE_SYSCALL);
+ regs->tstate &= ~(TSTATE_ICC | TSTATE_XCC | TSTATE_SYSCALL);
regs->tstate |= tstate;
}
}
switch (pos) {
case 32: /* PSR */
tstate = regs->tstate;
- tstate &= ~(TSTATE_ICC | TSTATE_XCC);
+ tstate &= ~(TSTATE_ICC | TSTATE_XCC | TSTATE_SYSCALL);
tstate |= psr_to_tstate_icc(reg);
+ if (reg & PSR_SYSCALL)
+ tstate |= TSTATE_SYSCALL;
regs->tstate = tstate;
break;
case 33: /* PC */
break;
default:
+ if (request == PTRACE_SPARC_DETACH)
+ request = PTRACE_DETACH;
ret = compat_ptrace_request(child, request, addr, data);
break;
}
break;
default:
+ if (request == PTRACE_SPARC_DETACH)
+ request = PTRACE_DETACH;
ret = ptrace_request(child, request, addr, data);
break;
}
wr %o3, %g0, %y
wrpr %l4, 0x0, %pil
wrpr %g0, 0x1, %tl
+ andn %l1, TSTATE_SYSCALL, %l1
wrpr %l1, %g0, %tstate
wrpr %l2, %g0, %tpc
wrpr %o2, %g0, %tnpc
regs->tnpc = tnpc;
/* Prevent syscall restart. */
- pt_regs_clear_trap_type(regs);
+ pt_regs_clear_syscall(regs);
sigdelsetmask(&set, ~_BLOCKABLE);
spin_lock_irq(&current->sighand->siglock);
static inline void __user *get_sigframe(struct k_sigaction *ka, struct pt_regs *regs, unsigned long framesize)
{
- unsigned long sp;
+ unsigned long sp = regs->u_regs[UREG_FP] + STACK_BIAS;
- sp = regs->u_regs[UREG_FP] + STACK_BIAS;
+ /*
+ * If we are on the alternate signal stack and would overflow it, don't.
+ * Return an always-bogus address instead so we will die with SIGSEGV.
+ */
+ if (on_sig_stack(sp) && !likely(on_sig_stack(sp - framesize)))
+ return (void __user *) -1L;
/* This is the X/Open sanctioned signal stack switching. */
if (ka->sa.sa_flags & SA_ONSTACK) {
- if (!on_sig_stack(sp) &&
- !((current->sas_ss_sp + current->sas_ss_size) & 7))
+ if (sas_ss_flags(sp) == 0)
sp = current->sas_ss_sp + current->sas_ss_size;
}
+
+ /* Always align the stack frame. This handles two cases. First,
+ * sigaltstack need not be mindful of platform specific stack
+ * alignment. Second, if we took this signal because the stack
+ * is not aligned properly, we'd like to take the signal cleanly
+ * and report that.
+ */
+ sp &= ~7UL;
+
return (void __user *)(sp - framesize);
}
}
static inline void syscall_restart(unsigned long orig_i0, struct pt_regs *regs,
- struct sigaction *sa)
+ struct sigaction *sa)
{
switch (regs->u_regs[UREG_I0]) {
case ERESTART_RESTARTBLOCK:
*/
static void do_signal(struct pt_regs *regs, unsigned long orig_i0)
{
- struct signal_deliver_cookie cookie;
struct k_sigaction ka;
+ int restart_syscall;
sigset_t *oldset;
siginfo_t info;
int signr;
if (pt_regs_is_syscall(regs) &&
(regs->tstate & (TSTATE_XCARRY | TSTATE_ICARRY))) {
- pt_regs_clear_trap_type(regs);
- cookie.restart_syscall = 1;
+ restart_syscall = 1;
} else
- cookie.restart_syscall = 0;
- cookie.orig_i0 = orig_i0;
+ restart_syscall = 0;
if (test_thread_flag(TIF_RESTORE_SIGMASK))
oldset = &current->saved_sigmask;
#ifdef CONFIG_COMPAT
if (test_thread_flag(TIF_32BIT)) {
extern void do_signal32(sigset_t *, struct pt_regs *,
- struct signal_deliver_cookie *);
- do_signal32(oldset, regs, &cookie);
+ int restart_syscall,
+ unsigned long orig_i0);
+ do_signal32(oldset, regs, restart_syscall, orig_i0);
return;
}
#endif
- signr = get_signal_to_deliver(&info, &ka, regs, &cookie);
+ signr = get_signal_to_deliver(&info, &ka, regs, NULL);
+
+ /* If the debugger messes with the program counter, it clears
+ * the software "in syscall" bit, directing us to not perform
+ * a syscall restart.
+ */
+ if (restart_syscall && !pt_regs_is_syscall(regs))
+ restart_syscall = 0;
+
if (signr > 0) {
- if (cookie.restart_syscall)
- syscall_restart(cookie.orig_i0, regs, &ka.sa);
+ if (restart_syscall)
+ syscall_restart(orig_i0, regs, &ka.sa);
handle_signal(signr, &ka, &info, oldset, regs);
/* a signal was successfully delivered; the saved
clear_thread_flag(TIF_RESTORE_SIGMASK);
return;
}
- if (cookie.restart_syscall &&
+ if (restart_syscall &&
(regs->u_regs[UREG_I0] == ERESTARTNOHAND ||
regs->u_regs[UREG_I0] == ERESTARTSYS ||
regs->u_regs[UREG_I0] == ERESTARTNOINTR)) {
/* replay the system call when we are done */
- regs->u_regs[UREG_I0] = cookie.orig_i0;
+ regs->u_regs[UREG_I0] = orig_i0;
regs->tpc -= 4;
regs->tnpc -= 4;
}
- if (cookie.restart_syscall &&
+ if (restart_syscall &&
regs->u_regs[UREG_I0] == ERESTART_RESTARTBLOCK) {
regs->u_regs[UREG_G1] = __NR_restart_syscall;
regs->tpc -= 4;
if (thread_info_flags & (_TIF_SIGPENDING | _TIF_RESTORE_SIGMASK))
do_signal(regs, orig_i0);
}
-
-void ptrace_signal_deliver(struct pt_regs *regs, void *cookie)
-{
- struct signal_deliver_cookie *cp = cookie;
-
- if (cp->restart_syscall &&
- (regs->u_regs[UREG_I0] == ERESTARTNOHAND ||
- regs->u_regs[UREG_I0] == ERESTARTSYS ||
- regs->u_regs[UREG_I0] == ERESTARTNOINTR)) {
- /* replay the system call when we are done */
- regs->u_regs[UREG_I0] = cp->orig_i0;
- regs->tpc -= 4;
- regs->tnpc -= 4;
- cp->restart_syscall = 0;
- }
- if (cp->restart_syscall &&
- regs->u_regs[UREG_I0] == ERESTART_RESTARTBLOCK) {
- regs->u_regs[UREG_G1] = __NR_restart_syscall;
- regs->tpc -= 4;
- regs->tnpc -= 4;
- cp->restart_syscall = 0;
- }
-}
regs->tstate |= psr_to_tstate_icc(psr);
/* Prevent syscall restart. */
- pt_regs_clear_trap_type(regs);
+ pt_regs_clear_syscall(regs);
err |= __get_user(fpu_save, &sf->fpu_save);
if (fpu_save)
regs->tstate |= psr_to_tstate_icc(psr);
/* Prevent syscall restart. */
- pt_regs_clear_trap_type(regs);
+ pt_regs_clear_syscall(regs);
err |= __get_user(fpu_save, &sf->fpu_save);
if (fpu_save)
regs->u_regs[UREG_FP] &= 0x00000000ffffffffUL;
sp = regs->u_regs[UREG_FP];
+ /*
+ * If we are on the alternate signal stack and would overflow it, don't.
+ * Return an always-bogus address instead so we will die with SIGSEGV.
+ */
+ if (on_sig_stack(sp) && !likely(on_sig_stack(sp - framesize)))
+ return (void __user *) -1L;
+
/* This is the X/Open sanctioned signal stack switching. */
if (sa->sa_flags & SA_ONSTACK) {
- if (!on_sig_stack(sp) && !((current->sas_ss_sp + current->sas_ss_size) & 7))
+ if (sas_ss_flags(sp) == 0)
sp = current->sas_ss_sp + current->sas_ss_size;
}
+
+ /* Always align the stack frame. This handles two cases. First,
+ * sigaltstack need not be mindful of platform specific stack
+ * alignment. Second, if we took this signal because the stack
+ * is not aligned properly, we'd like to take the signal cleanly
+ * and report that.
+ */
+ sp &= ~7UL;
+
return (void __user *)(sp - framesize);
}
* mistake.
*/
void do_signal32(sigset_t *oldset, struct pt_regs * regs,
- struct signal_deliver_cookie *cookie)
+ int restart_syscall, unsigned long orig_i0)
{
struct k_sigaction ka;
siginfo_t info;
int signr;
- signr = get_signal_to_deliver(&info, &ka, regs, cookie);
+ signr = get_signal_to_deliver(&info, &ka, regs, NULL);
+
+ /* If the debugger messes with the program counter, it clears
+ * the "in syscall" bit, directing us to not perform a syscall
+ * restart.
+ */
+ if (restart_syscall && !pt_regs_is_syscall(regs))
+ restart_syscall = 0;
+
if (signr > 0) {
- if (cookie->restart_syscall)
- syscall_restart32(cookie->orig_i0, regs, &ka.sa);
+ if (restart_syscall)
+ syscall_restart32(orig_i0, regs, &ka.sa);
handle_signal32(signr, &ka, &info, oldset, regs);
/* a signal was successfully delivered; the saved
clear_thread_flag(TIF_RESTORE_SIGMASK);
return;
}
- if (cookie->restart_syscall &&
+ if (restart_syscall &&
(regs->u_regs[UREG_I0] == ERESTARTNOHAND ||
regs->u_regs[UREG_I0] == ERESTARTSYS ||
regs->u_regs[UREG_I0] == ERESTARTNOINTR)) {
/* replay the system call when we are done */
- regs->u_regs[UREG_I0] = cookie->orig_i0;
+ regs->u_regs[UREG_I0] = orig_i0;
regs->tpc -= 4;
regs->tnpc -= 4;
}
- if (cookie->restart_syscall &&
+ if (restart_syscall &&
regs->u_regs[UREG_I0] == ERESTART_RESTARTBLOCK) {
regs->u_regs[UREG_G1] = __NR_restart_syscall;
regs->tpc -= 4;
line_flush_buffer(tty);
}
-void line_put_char(struct tty_struct *tty, unsigned char ch)
+int line_put_char(struct tty_struct *tty, unsigned char ch)
{
- line_write(tty, &ch, sizeof(ch));
+ return line_write(tty, &ch, sizeof(ch));
}
int line_write(struct tty_struct *tty, const unsigned char *buf, int len)
char *init, char **error_out);
extern int line_write(struct tty_struct *tty, const unsigned char *buf,
int len);
-extern void line_put_char(struct tty_struct *tty, unsigned char ch);
+extern int line_put_char(struct tty_struct *tty, unsigned char ch);
extern void line_set_termios(struct tty_struct *tty, struct ktermios * old);
extern int line_chars_in_buffer(struct tty_struct *tty);
extern void line_flush_buffer(struct tty_struct *tty);
select GENERIC_GPIO
select LEDS_CLASS
select LEDS_GPIO
+ select NEW_LEDS
help
This option is needed for RDC R-321x system-on-chip, also known
as R-8610-(G).
config OLPC
bool "One Laptop Per Child support"
- depends on MGEODE_LX
default n
help
Add support for detecting the unique features of the OLPC
obj-$(CONFIG_KVM_CLOCK) += kvmclock.o
obj-$(CONFIG_PARAVIRT) += paravirt.o paravirt_patch_$(BITS).o
-ifdef CONFIG_INPUT_PCSPKR
-obj-y += pcspeaker.o
-endif
+obj-$(CONFIG_PCSPKR_PLATFORM) += pcspeaker.o
obj-$(CONFIG_SCx200) += scx200.o
scx200-y += scx200_32.o
#include <linux/cpu.h>
+#include <asm/pat.h>
#include <asm/processor.h>
struct cpuid_bit {
set_cpu_cap(c, cb->feature);
}
}
+
+#ifdef CONFIG_X86_PAT
+void __cpuinit validate_pat_support(struct cpuinfo_x86 *c)
+{
+ switch (c->x86_vendor) {
+ case X86_VENDOR_AMD:
+ if (c->x86 >= 0xf && c->x86 <= 0x11)
+ return;
+ break;
+ case X86_VENDOR_INTEL:
+ if (c->x86 == 0xF || (c->x86 == 6 && c->x86_model >= 15))
+ return;
+ break;
+ }
+
+ pat_disable(cpu_has_pat ?
+ "PAT disabled. Not yet verified on this CPU type." :
+ "PAT not supported by CPU.");
+}
+#endif
#include <asm/mmu_context.h>
#include <asm/mtrr.h>
#include <asm/mce.h>
+#include <asm/pat.h>
#ifdef CONFIG_X86_LOCAL_APIC
#include <asm/mpspec.h>
#include <asm/apic.h>
}
- clear_cpu_cap(c, X86_FEATURE_PAT);
-
- switch (c->x86_vendor) {
- case X86_VENDOR_AMD:
- if (c->x86 >= 0xf && c->x86 <= 0x11)
- set_cpu_cap(c, X86_FEATURE_PAT);
- break;
- case X86_VENDOR_INTEL:
- if (c->x86 == 0xF || (c->x86 == 6 && c->x86_model >= 15))
- set_cpu_cap(c, X86_FEATURE_PAT);
- break;
- }
-
}
/*
init_scattered_cpuid_features(c);
}
- clear_cpu_cap(c, X86_FEATURE_PAT);
-
- switch (c->x86_vendor) {
- case X86_VENDOR_AMD:
- if (c->x86 >= 0xf && c->x86 <= 0x11)
- set_cpu_cap(c, X86_FEATURE_PAT);
- break;
- case X86_VENDOR_INTEL:
- if (c->x86 == 0xF || (c->x86 == 6 && c->x86_model >= 15))
- set_cpu_cap(c, X86_FEATURE_PAT);
- break;
- }
}
static void __cpuinit squash_the_stupid_serial_number(struct cpuinfo_x86 *c)
cpu_devs[cvdev->vendor] = cvdev->cpu_dev;
early_cpu_detect();
+ validate_pat_support(&boot_cpu_data);
}
/* Make sure %fs is initialized properly in idle threads */
}
EXPORT_SYMBOL_GPL(geode_gpio_setup_event);
+int geode_has_vsa2(void)
+{
+ static int has_vsa2 = -1;
+
+ if (has_vsa2 == -1) {
+ /*
+ * The VSA has virtual registers that we can query for a
+ * signature.
+ */
+ outw(VSA_VR_UNLOCK, VSA_VRC_INDEX);
+ outw(VSA_VR_SIGNATURE, VSA_VRC_INDEX);
+
+ has_vsa2 = (inw(VSA_VRC_DATA) == VSA_SIG);
+ }
+
+ return has_vsa2;
+}
+EXPORT_SYMBOL_GPL(geode_has_vsa2);
+
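As an aside, geode_has_vsa2() above uses the common probe-once-and-cache idiom (a static result initialized to -1). A minimal standalone sketch of that idiom follows; expensive_probe() is a made-up stand-in for the VSA virtual-register query and is not a kernel API:

 #include <stdio.h>

 /* Stand-in for the hardware query done once in geode_has_vsa2(). */
 static int expensive_probe(void)
 {
 	puts("probing...");	/* would touch hardware in the real code */
 	return 1;
 }

 static int has_feature(void)
 {
 	static int cached = -1;	/* -1 means "not probed yet" */

 	if (cached == -1)
 		cached = expensive_probe();
 	return cached;
 }

 int main(void)
 {
 	printf("%d %d\n", has_feature(), has_feature());	/* probes once */
 	return 0;
 }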
static int __init geode_southbridge_init(void)
{
if (!is_geode())
{
struct task_struct *tsk = current;
- clear_fpu(tsk);
return __copy_from_user(&tsk->thread.xstate->fsave, buf,
sizeof(struct i387_fsave_struct));
}
struct user_i387_ia32_struct env;
int err;
- clear_fpu(tsk);
err = __copy_from_user(&tsk->thread.xstate->fxsave, &buf->_fxsr_env[0],
sizeof(struct i387_fxsave_struct));
/* mxcsr reserved bits must be masked to zero for security reasons */
int err;
if (HAVE_HWFP) {
+ struct task_struct *tsk = current;
+
+ clear_fpu(tsk);
+
+ if (!used_math()) {
+ err = init_fpu(tsk);
+ if (err)
+ return err;
+ }
+
if (cpu_has_fxsr)
err = restore_i387_fxsave(buf);
else
/* Copy section for each CPU (we discard the original) */
size = PERCPU_ENOUGH_ROOM;
- printk(KERN_INFO "PERCPU: Allocating %zd bytes of per cpu data\n",
+ printk(KERN_INFO "PERCPU: Allocating %lu bytes of per cpu data\n",
size);
for_each_possible_cpu(i) {
}, {
.name = "keyboard",
.start = 0x0060,
- .end = 0x006f,
+ .end = 0x0060,
+ .flags = IORESOURCE_BUSY | IORESOURCE_IO
+}, {
+ .name = "keyboard",
+ .start = 0x0064,
+ .end = 0x0064,
.flags = IORESOURCE_BUSY | IORESOURCE_IO
}, {
.name = "dma page reg",
#include <asm/ds.h>
#include <asm/topology.h>
#include <asm/trampoline.h>
+#include <asm/pat.h>
#include <mach_apic.h>
#ifdef CONFIG_PARAVIRT
.flags = IORESOURCE_BUSY | IORESOURCE_IO },
{ .name = "timer1", .start = 0x50, .end = 0x53,
.flags = IORESOURCE_BUSY | IORESOURCE_IO },
- { .name = "keyboard", .start = 0x60, .end = 0x6f,
+ { .name = "keyboard", .start = 0x60, .end = 0x60,
+ .flags = IORESOURCE_BUSY | IORESOURCE_IO },
+ { .name = "keyboard", .start = 0x64, .end = 0x64,
.flags = IORESOURCE_BUSY | IORESOURCE_IO },
{ .name = "dma page reg", .start = 0x80, .end = 0x8f,
.flags = IORESOURCE_BUSY | IORESOURCE_IO },
if (c->extended_cpuid_level >= 0x80000007)
c->x86_power = cpuid_edx(0x80000007);
-
- clear_cpu_cap(c, X86_FEATURE_PAT);
-
switch (c->x86_vendor) {
case X86_VENDOR_AMD:
early_init_amd(c);
- if (c->x86 >= 0xf && c->x86 <= 0x11)
- set_cpu_cap(c, X86_FEATURE_PAT);
break;
case X86_VENDOR_INTEL:
early_init_intel(c);
- if (c->x86 == 0xF || (c->x86 == 6 && c->x86_model >= 15))
- set_cpu_cap(c, X86_FEATURE_PAT);
break;
case X86_VENDOR_CENTAUR:
early_init_centaur(c);
break;
}
+ validate_pat_support(c);
}
/*
#include <asm/mtrr.h>
#include <asm/io.h>
-int pat_wc_enabled = 1;
+#ifdef CONFIG_X86_PAT
+int __read_mostly pat_wc_enabled = 1;
-static u64 __read_mostly boot_pat_state;
-
-static int nopat(char *str)
+void __init pat_disable(char *reason)
{
pat_wc_enabled = 0;
- printk(KERN_INFO "x86: PAT support disabled.\n");
-
- return 0;
+ printk(KERN_INFO "%s\n", reason);
}
-early_param("nopat", nopat);
-static int pat_known_cpu(void)
+static int nopat(char *str)
{
- if (!pat_wc_enabled)
- return 0;
-
- if (cpu_has_pat)
- return 1;
-
- pat_wc_enabled = 0;
- printk(KERN_INFO "CPU and/or kernel does not support PAT.\n");
+ pat_disable("PAT support disabled.");
return 0;
}
+early_param("nopat", nopat);
+#endif
+
+static u64 __read_mostly boot_pat_state;
enum {
PAT_UC = 0, /* uncached */
{
u64 pat;
-#ifndef CONFIG_X86_PAT
- nopat(NULL);
-#endif
-
- /* Boot CPU enables PAT based on CPU feature */
- if (!smp_processor_id() && !pat_known_cpu())
+ if (!pat_wc_enabled)
return;
- /* APs enable PAT iff boot CPU has enabled it before */
- if (smp_processor_id() && !pat_wc_enabled)
- return;
+ /* Paranoia check. */
+ if (!cpu_has_pat) {
+ printk(KERN_ERR "PAT enabled, but CPU feature cleared\n");
+ /*
+ * Panic if this happens on the secondary CPU, and we
+ * switched to PAT on the boot CPU. We have no way to
+ * undo PAT.
+ */
+ BUG_ON(boot_pat_state);
+ }
/* Set PWT to Write-Combining. All other bits stay the same */
/*
PAT(4,WB) | PAT(5,WC) | PAT(6,UC_MINUS) | PAT(7,UC);
/* Boot CPU check */
- if (!smp_processor_id()) {
+ if (!boot_pat_state)
rdmsrl(MSR_IA32_CR_PAT, boot_pat_state);
- }
wrmsrl(MSR_IA32_CR_PAT, pat);
printk(KERN_INFO "x86 PAT enabled: cpu %d, old 0x%Lx, new 0x%Lx\n",
*/
DEFINE_SPINLOCK(pci_config_lock);
-static void __devinit pcibios_fixup_device_resources(struct pci_dev *dev)
-{
- struct resource *rom_r = &dev->resource[PCI_ROM_RESOURCE];
-
- if (rom_r->parent)
- return;
- if (rom_r->start)
- /* we deal with BIOS assigned ROM later */
- return;
- if (!(pci_probe & PCI_ASSIGN_ROMS))
- rom_r->start = rom_r->end = rom_r->flags = 0;
-}
-
static int __devinit can_skip_ioresource_align(const struct dmi_system_id *d)
{
pci_probe |= PCI_CAN_SKIP_ISA_ALIGN;
void __devinit pcibios_fixup_bus(struct pci_bus *b)
{
- struct pci_dev *dev;
-
pci_read_bridge_bases(b);
- list_for_each_entry(dev, &b->devices, bus_list)
- pcibios_fixup_device_resources(dev);
}
/*
}
}
-#ifdef CONFIG_NUMA
- for (i = 0; i < BUS_NR; i++) {
- node = mp_bus_to_node[i];
- if (node >= 0)
- printk(KERN_DEBUG "bus: %02x to node: %02x\n", i, node);
- }
-#endif
-
for (i = 0; i < pci_root_num; i++) {
int res_num;
int busnum;
static void drive_stat_acct(struct request *rq, int new_io)
{
+ struct hd_struct *part;
int rw = rq_data_dir(rq);
if (!blk_fs_request(rq) || !rq->rq_disk)
return;
- if (!new_io) {
- __all_stat_inc(rq->rq_disk, merges[rw], rq->sector);
- } else {
- struct hd_struct *part = get_part(rq->rq_disk, rq->sector);
+ part = get_part(rq->rq_disk, rq->sector);
+ if (!new_io)
+ __all_stat_inc(rq->rq_disk, part, merges[rw], rq->sector);
+ else {
disk_round_stats(rq->rq_disk);
rq->rq_disk->in_flight++;
if (part) {
**/
void generic_unplug_device(struct request_queue *q)
{
- spin_lock_irq(q->queue_lock);
- __generic_unplug_device(q);
- spin_unlock_irq(q->queue_lock);
+ if (blk_queue_plugged(q)) {
+ spin_lock_irq(q->queue_lock);
+ __generic_unplug_device(q);
+ spin_unlock_irq(q->queue_lock);
+ }
}
EXPORT_SYMBOL(generic_unplug_device);
}
if (blk_fs_request(req) && req->rq_disk) {
+ struct hd_struct *part = get_part(req->rq_disk, req->sector);
const int rw = rq_data_dir(req);
- all_stat_add(req->rq_disk, sectors[rw],
- nr_bytes >> 9, req->sector);
+ all_stat_add(req->rq_disk, part, sectors[rw],
+ nr_bytes >> 9, req->sector);
}
total_bytes = bio_nbytes = 0;
const int rw = rq_data_dir(req);
struct hd_struct *part = get_part(disk, req->sector);
- __all_stat_inc(disk, ios[rw], req->sector);
- __all_stat_add(disk, ticks[rw], duration, req->sector);
+ __all_stat_inc(disk, part, ios[rw], req->sector);
+ __all_stat_add(disk, part, ticks[rw], duration, req->sector);
disk_round_stats(disk);
disk->in_flight--;
if (part) {
rcu_read_lock();
if (ioc->aic && ioc->aic->dtor)
ioc->aic->dtor(ioc->aic);
- rcu_read_unlock();
cfq_dtor(ioc);
+ rcu_read_unlock();
kmem_cache_free(iocontext_cachep, ioc);
return 1;
static int blk_hw_contig_segment(struct request_queue *q, struct bio *bio,
struct bio *nxt)
{
- if (unlikely(!bio_flagged(bio, BIO_SEG_VALID)))
+ if (!bio_flagged(bio, BIO_SEG_VALID))
blk_recount_segments(q, bio);
- if (unlikely(!bio_flagged(nxt, BIO_SEG_VALID)))
+ if (!bio_flagged(nxt, BIO_SEG_VALID))
blk_recount_segments(q, nxt);
if (!BIOVEC_VIRT_MERGEABLE(__BVEC_END(bio), __BVEC_START(nxt)) ||
BIOVEC_VIRT_OVERSIZE(bio->bi_hw_back_size + nxt->bi_hw_front_size))
q->last_merge = NULL;
return 0;
}
- if (unlikely(!bio_flagged(req->biotail, BIO_SEG_VALID)))
+ if (!bio_flagged(req->biotail, BIO_SEG_VALID))
blk_recount_segments(q, req->biotail);
- if (unlikely(!bio_flagged(bio, BIO_SEG_VALID)))
+ if (!bio_flagged(bio, BIO_SEG_VALID))
blk_recount_segments(q, bio);
len = req->biotail->bi_hw_back_size + bio->bi_hw_front_size;
if (BIOVEC_VIRT_MERGEABLE(__BVEC_END(req->biotail), __BVEC_START(bio))
return 0;
}
len = bio->bi_hw_back_size + req->bio->bi_hw_front_size;
- if (unlikely(!bio_flagged(bio, BIO_SEG_VALID)))
+ if (!bio_flagged(bio, BIO_SEG_VALID))
blk_recount_segments(q, bio);
- if (unlikely(!bio_flagged(req->bio, BIO_SEG_VALID)))
+ if (!bio_flagged(req->bio, BIO_SEG_VALID))
blk_recount_segments(q, req->bio);
if (BIOVEC_VIRT_MERGEABLE(__BVEC_END(bio), __BVEC_START(req->bio)) &&
!BIOVEC_VIRT_OVERSIZE(len)) {
unsigned long nm;
ssize_t ret = queue_var_store(&nm, page, count);
+ spin_lock_irq(q->queue_lock);
if (nm)
- set_bit(QUEUE_FLAG_NOMERGES, &q->queue_flags);
+ queue_flag_set(QUEUE_FLAG_NOMERGES, q);
else
- clear_bit(QUEUE_FLAG_NOMERGES, &q->queue_flags);
+ queue_flag_clear(QUEUE_FLAG_NOMERGES, q);
+ spin_unlock_irq(q->queue_lock);
return ret;
}
__blk_free_tags(bqt);
q->queue_tags = NULL;
- queue_flag_clear(QUEUE_FLAG_QUEUED, q);
+ queue_flag_clear_unlocked(QUEUE_FLAG_QUEUED, q);
}
/**
**/
void blk_queue_free_tags(struct request_queue *q)
{
- queue_flag_clear(QUEUE_FLAG_QUEUED, q);
+ queue_flag_clear_unlocked(QUEUE_FLAG_QUEUED, q);
}
EXPORT_SYMBOL(blk_queue_free_tags);
* @q: the request queue for the device
* @depth: the maximum queue depth supported
* @tags: the tag to use
+ *
+ * Queue lock must be held here if the function is called to resize an
+ * existing map.
**/
int blk_queue_init_tags(struct request_queue *q, int depth,
struct blk_queue_tag *tags)
* assign it, all done
*/
q->queue_tags = tags;
- queue_flag_set(QUEUE_FLAG_QUEUED, q);
+ queue_flag_set_unlocked(QUEUE_FLAG_QUEUED, q);
INIT_LIST_HEAD(&q->tag_busy_list);
return 0;
fail:
kmem_cache_free(cfq_pool, cfqq);
}
+static void
+__call_for_each_cic(struct io_context *ioc,
+ void (*func)(struct io_context *, struct cfq_io_context *))
+{
+ struct cfq_io_context *cic;
+ struct hlist_node *n;
+
+ hlist_for_each_entry_rcu(cic, n, &ioc->cic_list, cic_list)
+ func(ioc, cic);
+}
+
/*
* Call func for each cic attached to this ioc.
*/
call_for_each_cic(struct io_context *ioc,
void (*func)(struct io_context *, struct cfq_io_context *))
{
- struct cfq_io_context *cic;
- struct hlist_node *n;
-
rcu_read_lock();
- hlist_for_each_entry_rcu(cic, n, &ioc->cic_list, cic_list)
- func(ioc, cic);
+ __call_for_each_cic(ioc, func);
rcu_read_unlock();
}
* should be ok to iterate over the known list, we will see all cic's
* since no new ones are added.
*/
- call_for_each_cic(ioc, cic_free_func);
+ __call_for_each_cic(ioc, cic_free_func);
}
static void cfq_exit_cfqq(struct cfq_data *cfqd, struct cfq_queue *cfqq)
printk(KERN_ERR "cfq: bad prio %x\n", ioprio_class);
case IOPRIO_CLASS_NONE:
/*
- * no prio set, place us in the middle of the BE classes
+ * no prio set, inherit CPU scheduling settings
*/
cfqq->ioprio = task_nice_ioprio(tsk);
- cfqq->ioprio_class = IOPRIO_CLASS_BE;
+ cfqq->ioprio_class = task_nice_ioclass(tsk);
break;
case IOPRIO_CLASS_RT:
cfqq->ioprio = task_ioprio(ioc);
if (keylen > bs) {
struct hash_desc desc;
struct scatterlist tmp;
+ int tmplen;
int err;
desc.tfm = tfm;
desc.flags = crypto_hash_get_flags(parent);
desc.flags &= CRYPTO_TFM_REQ_MAY_SLEEP;
- sg_init_one(&tmp, inkey, keylen);
- err = crypto_hash_digest(&desc, &tmp, keylen, digest);
+ err = crypto_hash_init(&desc);
+ if (err)
+ return err;
+
+ tmplen = bs * 2 + ds;
+ sg_init_one(&tmp, ipad, tmplen);
+
+ for (; keylen > tmplen; inkey += tmplen, keylen -= tmplen) {
+ memcpy(ipad, inkey, tmplen);
+ err = crypto_hash_update(&desc, &tmp, tmplen);
+ if (err)
+ return err;
+ }
+
+ if (keylen) {
+ memcpy(ipad, inkey, keylen);
+ err = crypto_hash_update(&desc, &tmp, keylen);
+ if (err)
+ return err;
+ }
+
+ err = crypto_hash_final(&desc, digest);
if (err)
return err;
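The reworked key handling above streams an over-long key through the ipad scratch area in tmplen-sized pieces using crypto_hash_init/update/final instead of a single crypto_hash_digest() call. The standalone sketch below shows only that bounce-buffer chunking pattern; toy_update() is a deliberately trivial stand-in for the real hash update and is not a kernel API:

 #include <stdio.h>
 #include <string.h>

 /* Toy "hash" state: a running byte sum, purely for illustration. */
 struct toy_hash { unsigned sum; };

 static void toy_update(struct toy_hash *h, const unsigned char *p, size_t n)
 {
 	while (n--)
 		h->sum += *p++;
 }

 /* Feed a key of arbitrary length through a fixed-size bounce buffer,
  * mirroring the loop structure of the setkey change above. */
 static unsigned hash_key_chunked(const unsigned char *inkey, size_t keylen,
 				 size_t buflen)
 {
 	unsigned char buf[64];		/* plays the role of ipad */
 	struct toy_hash h = { 0 };

 	if (buflen > sizeof(buf))
 		buflen = sizeof(buf);

 	for (; keylen > buflen; inkey += buflen, keylen -= buflen) {
 		memcpy(buf, inkey, buflen);
 		toy_update(&h, buf, buflen);
 	}
 	if (keylen) {
 		memcpy(buf, inkey, keylen);
 		toy_update(&h, buf, keylen);
 	}
 	return h.sum;			/* "final" step */
 }

 int main(void)
 {
 	unsigned char key[200];

 	memset(key, 0xab, sizeof(key));
 	printf("digest stand-in: %u\n", hash_key_chunked(key, sizeof(key), 48));
 	return 0;
 }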
menuconfig ACCESSIBILITY
bool "Accessibility support"
---help---
- Enable a submenu where accessibility items may be enabled.
+ Accessibility handles all special kinds of hardware devices or
+ software adapters which help people with disabilities (e.g.
+ blindness) to use computers.
+
+ That includes braille devices, speech synthesis, keyboard
+ remapping, etc.
+
+ Say Y here to get to see options for accessibility.
+ This option alone does not add any kernel code.
+
+ If you say N, all options in this submenu will be skipped and disabled.
If unsure, say N.
{
unsigned long n_sect = bio->bi_size >> 9;
const int rw = bio_data_dir(bio);
+ struct hd_struct *part;
- all_stat_inc(disk, ios[rw], sector);
- all_stat_add(disk, ticks[rw], duration, sector);
- all_stat_add(disk, sectors[rw], n_sect, sector);
- all_stat_add(disk, io_ticks, duration, sector);
+ part = get_part(disk, sector);
+ all_stat_inc(disk, part, ios[rw], sector);
+ all_stat_add(disk, part, ticks[rw], duration, sector);
+ all_stat_add(disk, part, sectors[rw], n_sect, sector);
+ all_stat_add(disk, part, io_ticks, duration, sector);
}
void
sx_write_channel_byte(port, hi_mask, 0x1f);
break;
default:
- printk(KERN_INFO "sx: Invalid wordsize: %u\n", CFLAG & CSIZE);
+ printk(KERN_INFO "sx: Invalid wordsize: %u\n",
+ (unsigned int)CFLAG & CSIZE);
break;
}
set_bit(TTY_HW_COOK_IN, &port->gs.tty->flags);
}
sx_dprintk(SX_DEBUG_TERMIOS, "iflags: %x(%d) ",
- port->gs.tty->termios->c_iflag, I_OTHER(port->gs.tty));
+ (unsigned int)port->gs.tty->termios->c_iflag,
+ I_OTHER(port->gs.tty));
/* Tell line discipline whether we will do output cooking.
* If OPOST is set and no other output flags are set then we can do output
clear_bit(TTY_HW_COOK_OUT, &port->gs.tty->flags);
}
sx_dprintk(SX_DEBUG_TERMIOS, "oflags: %x(%d)\n",
- port->gs.tty->termios->c_oflag, O_OTHER(port->gs.tty));
+ (unsigned int)port->gs.tty->termios->c_oflag,
+ O_OTHER(port->gs.tty));
/* port->c_dcd = sx_get_CD (port); */
func_exit();
return 0;
tty->winsize.ws_row = vc_cons[currcons].d->vc_rows;
tty->winsize.ws_col = vc_cons[currcons].d->vc_cols;
}
+ if (vc->vc_utf)
+ tty->termios->c_iflag |= IUTF8;
+ else
+ tty->termios->c_iflag &= ~IUTF8;
release_console_sem();
vcs_make_sysfs(tty);
return ret;
console_driver->minor_start = 1;
console_driver->type = TTY_DRIVER_TYPE_CONSOLE;
console_driver->init_termios = tty_std_termios;
+ if (default_utf8)
+ console_driver->init_termios.c_iflag |= IUTF8;
console_driver->flags = TTY_DRIVER_REAL_RAW | TTY_DRIVER_RESET_TERMIOS;
tty_set_operations(console_driver, &con_ops);
if (tty_register_driver(console_driver))
u32 x;
int result = 0;
- if (i2c->irq == 0)
+ if (i2c->irq == NO_IRQ)
{
while (!(readb(i2c->base + MPC_I2C_SR) & CSR_MIF)) {
schedule();
return -ENOMEM;
i2c->irq = platform_get_irq(pdev, 0);
- if (i2c->irq < 0) {
- result = -ENXIO;
- goto fail_get_irq;
- }
+ if (i2c->irq < 0)
+ i2c->irq = NO_IRQ; /* Use polling */
+
i2c->flags = pdata->device_flags;
init_waitqueue_head(&i2c->queue);
goto fail_map;
}
- if (i2c->irq != 0)
+ if (i2c->irq != NO_IRQ)
if ((result = request_irq(i2c->irq, mpc_i2c_isr,
IRQF_SHARED, "i2c-mpc", i2c)) < 0) {
printk(KERN_ERR
return result;
fail_add:
- if (i2c->irq != 0)
+ if (i2c->irq != NO_IRQ)
free_irq(i2c->irq, i2c);
fail_irq:
iounmap(i2c->base);
fail_map:
- fail_get_irq:
kfree(i2c);
return result;
};
i2c_del_adapter(&i2c->adap);
platform_set_drvdata(pdev, NULL);
- if (i2c->irq != 0)
+ if (i2c->irq != NO_IRQ)
free_irq(i2c->irq, i2c);
iounmap(i2c->base);
static int piix4_transaction(void);
static unsigned short piix4_smba;
+static int srvrworks_csb5_delay;
static struct pci_driver piix4_driver;
static struct i2c_adapter piix4_adapter;
-static struct dmi_system_id __devinitdata piix4_dmi_table[] = {
+static struct dmi_system_id __devinitdata piix4_dmi_blacklist[] = {
+ {
+ .ident = "Sapphire AM2RD790",
+ .matches = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "SAPPHIRE Inc."),
+ DMI_MATCH(DMI_BOARD_NAME, "PC-AM2RD790"),
+ },
+ },
+ {
+ .ident = "DFI Lanparty UT 790FX",
+ .matches = {
+ DMI_MATCH(DMI_BOARD_VENDOR, "DFI Inc."),
+ DMI_MATCH(DMI_BOARD_NAME, "LP UT 790FX"),
+ },
+ },
+ { }
+};
+
+/* The IBM entry is in a separate table because we only check it
+ on Intel-based systems */
+static struct dmi_system_id __devinitdata piix4_dmi_ibm[] = {
{
.ident = "IBM",
.matches = { DMI_MATCH(DMI_SYS_VENDOR, "IBM"), },
dev_info(&PIIX4_dev->dev, "Found %s device\n", pci_name(PIIX4_dev));
+ if ((PIIX4_dev->vendor == PCI_VENDOR_ID_SERVERWORKS) &&
+ (PIIX4_dev->device == PCI_DEVICE_ID_SERVERWORKS_CSB5))
+ srvrworks_csb5_delay = 1;
+
+ /* On some motherboards, it was reported that accessing the SMBus
+ caused severe hardware problems */
+ if (dmi_check_system(piix4_dmi_blacklist)) {
+ dev_err(&PIIX4_dev->dev,
+ "Accessing the SMBus on this system is unsafe!\n");
+ return -EPERM;
+ }
+
/* Don't access SMBus on IBM systems which get corrupted eeproms */
- if (dmi_check_system(piix4_dmi_table) &&
+ if (dmi_check_system(piix4_dmi_ibm) &&
PIIX4_dev->vendor == PCI_VENDOR_ID_INTEL) {
dev_err(&PIIX4_dev->dev, "IBM system detected; this module "
"may corrupt your serial eeprom! Refusing to load "
outb_p(inb(SMBHSTCNT) | 0x040, SMBHSTCNT);
/* We will always wait for a fraction of a second! (See PIIX4 docs errata) */
- do {
+ if (srvrworks_csb5_delay) /* Extra delay for SERVERWORKS_CSB5 */
+ msleep(2);
+ else
+ msleep(1);
+
+ while ((timeout++ < MAX_TIMEOUT) &&
+ ((temp = inb_p(SMBHSTSTS)) & 0x01))
msleep(1);
- temp = inb_p(SMBHSTSTS);
- } while ((temp & 0x01) && (timeout++ < MAX_TIMEOUT));
/* If the SMBus is still busy, we give up */
if (timeout >= MAX_TIMEOUT) {
/*
* registering functions to load algorithms at runtime
*/
-int __init i2c_sibyte_add_bus(struct i2c_adapter *i2c_adap, int speed)
+static int __init i2c_sibyte_add_bus(struct i2c_adapter *i2c_adap, int speed)
{
struct i2c_algo_sibyte_data *adap = i2c_adap->algo_data;
- /* register new adapter to i2c module... */
+ /* Register new adapter to i2c module... */
i2c_adap->algo = &i2c_sibyte_algo;
- /* Set the frequency to 100 kHz */
+ /* Set the requested frequency. */
csr_out32(speed, SMB_CSR(adap,R_SMB_FREQ));
csr_out32(0, SMB_CSR(adap,R_SMB_CONTROL));
EXPORT_SYMBOL_GPL(i2c_unregister_device);
+static const struct i2c_device_id dummy_id[] = {
+ { "dummy", 0 },
+ { },
+};
+
static int dummy_probe(struct i2c_client *client,
const struct i2c_device_id *id)
{
.driver.name = "dummy",
.probe = dummy_probe,
.remove = dummy_remove,
+ .id_table = dummy_id,
};
/**
* i2c_new_dummy - return a new i2c device bound to a dummy driver
* @adapter: the adapter managing the device
* @address: seven bit address to be used
- * @type: optional label used for i2c_client.name
* Context: can sleep
*
* This returns an I2C client bound to the "dummy" driver, intended for use
* i2c_unregister_device(); or NULL to indicate an error.
*/
struct i2c_client *
-i2c_new_dummy(struct i2c_adapter *adapter, u16 address, const char *type)
+i2c_new_dummy(struct i2c_adapter *adapter, u16 address)
{
struct i2c_board_info info = {
- .driver_name = "dummy",
- .addr = address,
+ I2C_BOARD_INFO("dummy", address),
};
- if (type)
- strlcpy(info.type, type, sizeof info.type);
return i2c_new_device(adapter, &info);
}
EXPORT_SYMBOL_GPL(i2c_new_dummy);
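With the simplified signature, a caller just passes the adapter and the address. A hypothetical use from some driver's probe path might look like the snippet below; the 0x51 address and the `second` variable are invented for illustration, and `client` is assumed to be the probe argument:

 	/* Hypothetical example: a chip that also answers on a second,
 	 * fixed address gets a "dummy" client so the driver can reach it. */
 	struct i2c_client *second;

 	second = i2c_new_dummy(client->adapter, 0x51);
 	if (!second)
 		return -ENODEV;		/* address in use or registration failed */

 	/* ... and on remove: i2c_unregister_device(second); */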
* caller acquires the ctrl_qp lock before the call
*/
static int cxio_hal_ctrl_qp_write_mem(struct cxio_rdev *rdev_p, u32 addr,
- u32 len, void *data, int completion)
+ u32 len, void *data)
{
u32 i, nr_wqe, copy_len;
u8 *copy_data;
flag = 0;
if (i == (nr_wqe - 1)) {
/* last WQE */
- flag = completion ? T3_COMPLETION_FLAG : 0;
+ flag = T3_COMPLETION_FLAG;
if (len % 32)
utx_len = len / 32 + 1;
else
return 0;
}
-/* IN: stag key, pdid, perm, zbva, to, len, page_size, pbl, and pbl_size
- * OUT: stag index, actual pbl_size, pbl_addr allocated.
+/* IN: stag key, pdid, perm, zbva, to, len, page_size, pbl_size and pbl_addr
+ * OUT: stag index
* TBD: shared memory region support
*/
static int __cxio_tpt_op(struct cxio_rdev *rdev_p, u32 reset_tpt_entry,
u32 *stag, u8 stag_state, u32 pdid,
enum tpt_mem_type type, enum tpt_mem_perm perm,
- u32 zbva, u64 to, u32 len, u8 page_size, __be64 *pbl,
- u32 *pbl_size, u32 *pbl_addr)
+ u32 zbva, u64 to, u32 len, u8 page_size,
+ u32 pbl_size, u32 pbl_addr)
{
int err;
struct tpt_entry tpt;
u32 stag_idx;
u32 wptr;
- int rereg = (*stag != T3_STAG_UNSET);
stag_state = stag_state > 0;
stag_idx = (*stag) >> 8;
PDBG("%s stag_state 0x%0x type 0x%0x pdid 0x%0x, stag_idx 0x%x\n",
__func__, stag_state, type, pdid, stag_idx);
- if (reset_tpt_entry)
- cxio_hal_pblpool_free(rdev_p, *pbl_addr, *pbl_size << 3);
- else if (!rereg) {
- *pbl_addr = cxio_hal_pblpool_alloc(rdev_p, *pbl_size << 3);
- if (!*pbl_addr) {
- return -ENOMEM;
- }
- }
-
mutex_lock(&rdev_p->ctrl_qp.lock);
- /* write PBL first if any - update pbl only if pbl list exist */
- if (pbl) {
-
- PDBG("%s *pdb_addr 0x%x, pbl_base 0x%x, pbl_size %d\n",
- __func__, *pbl_addr, rdev_p->rnic_info.pbl_base,
- *pbl_size);
- err = cxio_hal_ctrl_qp_write_mem(rdev_p,
- (*pbl_addr >> 5),
- (*pbl_size << 3), pbl, 0);
- if (err)
- goto ret;
- }
-
/* write TPT entry */
if (reset_tpt_entry)
memset(&tpt, 0, sizeof(tpt));
V_TPT_ADDR_TYPE((zbva ? TPT_ZBTO : TPT_VATO)) |
V_TPT_PAGE_SIZE(page_size));
tpt.rsvd_pbl_addr = reset_tpt_entry ? 0 :
- cpu_to_be32(V_TPT_PBL_ADDR(PBL_OFF(rdev_p, *pbl_addr)>>3));
+ cpu_to_be32(V_TPT_PBL_ADDR(PBL_OFF(rdev_p, pbl_addr)>>3));
tpt.len = cpu_to_be32(len);
tpt.va_hi = cpu_to_be32((u32) (to >> 32));
tpt.va_low_or_fbo = cpu_to_be32((u32) (to & 0xFFFFFFFFULL));
tpt.rsvd_bind_cnt_or_pstag = 0;
tpt.rsvd_pbl_size = reset_tpt_entry ? 0 :
- cpu_to_be32(V_TPT_PBL_SIZE((*pbl_size) >> 2));
+ cpu_to_be32(V_TPT_PBL_SIZE(pbl_size >> 2));
}
err = cxio_hal_ctrl_qp_write_mem(rdev_p,
stag_idx +
(rdev_p->rnic_info.tpt_base >> 5),
- sizeof(tpt), &tpt, 1);
+ sizeof(tpt), &tpt);
/* release the stag index to free pool */
if (reset_tpt_entry)
cxio_hal_put_stag(rdev_p->rscp, stag_idx);
-ret:
+
wptr = rdev_p->ctrl_qp.wptr;
mutex_unlock(&rdev_p->ctrl_qp.lock);
if (!err)
return err;
}
+int cxio_write_pbl(struct cxio_rdev *rdev_p, __be64 *pbl,
+ u32 pbl_addr, u32 pbl_size)
+{
+ u32 wptr;
+ int err;
+
+ PDBG("%s pbl_addr 0x%x, pbl_base 0x%x, pbl_size %d\n",
+ __func__, pbl_addr, rdev_p->rnic_info.pbl_base,
+ pbl_size);
+
+ mutex_lock(&rdev_p->ctrl_qp.lock);
+ err = cxio_hal_ctrl_qp_write_mem(rdev_p, pbl_addr >> 5, pbl_size << 3,
+ pbl);
+ wptr = rdev_p->ctrl_qp.wptr;
+ mutex_unlock(&rdev_p->ctrl_qp.lock);
+ if (err)
+ return err;
+
+ if (wait_event_interruptible(rdev_p->ctrl_qp.waitq,
+ SEQ32_GE(rdev_p->ctrl_qp.rptr,
+ wptr)))
+ return -ERESTARTSYS;
+
+ return 0;
+}
+
int cxio_register_phys_mem(struct cxio_rdev *rdev_p, u32 *stag, u32 pdid,
enum tpt_mem_perm perm, u32 zbva, u64 to, u32 len,
- u8 page_size, __be64 *pbl, u32 *pbl_size,
- u32 *pbl_addr)
+ u8 page_size, u32 pbl_size, u32 pbl_addr)
{
*stag = T3_STAG_UNSET;
return __cxio_tpt_op(rdev_p, 0, stag, 1, pdid, TPT_NON_SHARED_MR, perm,
- zbva, to, len, page_size, pbl, pbl_size, pbl_addr);
+ zbva, to, len, page_size, pbl_size, pbl_addr);
}
int cxio_reregister_phys_mem(struct cxio_rdev *rdev_p, u32 *stag, u32 pdid,
enum tpt_mem_perm perm, u32 zbva, u64 to, u32 len,
- u8 page_size, __be64 *pbl, u32 *pbl_size,
- u32 *pbl_addr)
+ u8 page_size, u32 pbl_size, u32 pbl_addr)
{
return __cxio_tpt_op(rdev_p, 0, stag, 1, pdid, TPT_NON_SHARED_MR, perm,
- zbva, to, len, page_size, pbl, pbl_size, pbl_addr);
+ zbva, to, len, page_size, pbl_size, pbl_addr);
}
int cxio_dereg_mem(struct cxio_rdev *rdev_p, u32 stag, u32 pbl_size,
u32 pbl_addr)
{
- return __cxio_tpt_op(rdev_p, 1, &stag, 0, 0, 0, 0, 0, 0ULL, 0, 0, NULL,
- &pbl_size, &pbl_addr);
+ return __cxio_tpt_op(rdev_p, 1, &stag, 0, 0, 0, 0, 0, 0ULL, 0, 0,
+ pbl_size, pbl_addr);
}
int cxio_allocate_window(struct cxio_rdev *rdev_p, u32 * stag, u32 pdid)
{
- u32 pbl_size = 0;
*stag = T3_STAG_UNSET;
return __cxio_tpt_op(rdev_p, 0, stag, 0, pdid, TPT_MW, 0, 0, 0ULL, 0, 0,
- NULL, &pbl_size, NULL);
+ 0, 0);
}
int cxio_deallocate_window(struct cxio_rdev *rdev_p, u32 stag)
{
- return __cxio_tpt_op(rdev_p, 1, &stag, 0, 0, 0, 0, 0, 0ULL, 0, 0, NULL,
- NULL, NULL);
+ return __cxio_tpt_op(rdev_p, 1, &stag, 0, 0, 0, 0, 0, 0ULL, 0, 0,
+ 0, 0);
}
int cxio_rdma_init(struct cxio_rdev *rdev_p, struct t3_rdma_init_attr *attr)
int cxio_destroy_qp(struct cxio_rdev *rdev, struct t3_wq *wq,
struct cxio_ucontext *uctx);
int cxio_peek_cq(struct t3_wq *wr, struct t3_cq *cq, int opcode);
+int cxio_write_pbl(struct cxio_rdev *rdev_p, __be64 *pbl,
+ u32 pbl_addr, u32 pbl_size);
int cxio_register_phys_mem(struct cxio_rdev *rdev, u32 * stag, u32 pdid,
enum tpt_mem_perm perm, u32 zbva, u64 to, u32 len,
- u8 page_size, __be64 *pbl, u32 *pbl_size,
- u32 *pbl_addr);
+ u8 page_size, u32 pbl_size, u32 pbl_addr);
int cxio_reregister_phys_mem(struct cxio_rdev *rdev, u32 * stag, u32 pdid,
enum tpt_mem_perm perm, u32 zbva, u64 to, u32 len,
- u8 page_size, __be64 *pbl, u32 *pbl_size,
- u32 *pbl_addr);
+ u8 page_size, u32 pbl_size, u32 pbl_addr);
int cxio_dereg_mem(struct cxio_rdev *rdev, u32 stag, u32 pbl_size,
u32 pbl_addr);
int cxio_allocate_window(struct cxio_rdev *rdev, u32 * stag, u32 pdid);
*/
#define MIN_PBL_SHIFT 8 /* 256B == min PBL size (32 entries) */
-#define PBL_CHUNK 2*1024*1024
u32 cxio_hal_pblpool_alloc(struct cxio_rdev *rdev_p, int size)
{
int cxio_hal_pblpool_create(struct cxio_rdev *rdev_p)
{
- unsigned long i;
+ unsigned pbl_start, pbl_chunk;
+
rdev_p->pbl_pool = gen_pool_create(MIN_PBL_SHIFT, -1);
- if (rdev_p->pbl_pool)
- for (i = rdev_p->rnic_info.pbl_base;
- i <= rdev_p->rnic_info.pbl_top - PBL_CHUNK + 1;
- i += PBL_CHUNK)
- gen_pool_add(rdev_p->pbl_pool, i, PBL_CHUNK, -1);
- return rdev_p->pbl_pool ? 0 : -ENOMEM;
+ if (!rdev_p->pbl_pool)
+ return -ENOMEM;
+
+ pbl_start = rdev_p->rnic_info.pbl_base;
+ pbl_chunk = rdev_p->rnic_info.pbl_top - pbl_start + 1;
+
+ while (pbl_start < rdev_p->rnic_info.pbl_top) {
+ pbl_chunk = min(rdev_p->rnic_info.pbl_top - pbl_start + 1,
+ pbl_chunk);
+ if (gen_pool_add(rdev_p->pbl_pool, pbl_start, pbl_chunk, -1)) {
+ PDBG("%s failed to add PBL chunk (%x/%x)\n",
+ __func__, pbl_start, pbl_chunk);
+ if (pbl_chunk <= 1024 << MIN_PBL_SHIFT) {
+ printk(KERN_WARNING MOD "%s: Failed to add all PBL chunks (%x/%x)\n",
+ __func__, pbl_start, rdev_p->rnic_info.pbl_top - pbl_start);
+ return 0;
+ }
+ pbl_chunk >>= 1;
+ } else {
+ PDBG("%s added PBL chunk (%x/%x)\n",
+ __func__, pbl_start, pbl_chunk);
+ pbl_start += pbl_chunk;
+ }
+ }
+
+ return 0;
}
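The rewritten pool setup above no longer adds fixed 2 MB chunks; it tries the largest chunk that still fits and halves it whenever gen_pool_add() fails, giving up only below a floor. A small self-contained sketch of that back-off loop, with a fake add_chunk() that rejects anything above an arbitrary limit:

 #include <stdio.h>

 #define MIN_CHUNK	256u	/* analogous to the 1024 << MIN_PBL_SHIFT floor */
 #define FAKE_LIMIT	4096u	/* pretend the allocator rejects bigger adds */

 /* Stand-in for gen_pool_add(): fails for "too large" chunks. */
 static int add_chunk(unsigned start, unsigned len)
 {
 	return len > FAKE_LIMIT ? -1 : 0;
 }

 static int populate(unsigned base, unsigned top)
 {
 	unsigned start = base;
 	unsigned chunk = top - base + 1;	/* start with everything */

 	while (start < top) {
 		if (chunk > top - start + 1)
 			chunk = top - start + 1;	/* clamp to what is left */
 		if (add_chunk(start, chunk)) {
 			if (chunk <= MIN_CHUNK) {
 				fprintf(stderr, "gave up at %#x\n", start);
 				return 0;	/* keep whatever was added */
 			}
 			chunk >>= 1;		/* back off and retry */
 		} else {
 			printf("added %#x len %#x\n", start, chunk);
 			start += chunk;
 		}
 	}
 	return 0;
 }

 int main(void)
 {
 	return populate(0x10000, 0x1ffff);
 }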
void cxio_hal_pblpool_destroy(struct cxio_rdev *rdev_p)
#include <rdma/ib_verbs.h>
#include "cxio_hal.h"
+#include "cxio_resource.h"
#include "iwch.h"
#include "iwch_provider.h"
-int iwch_register_mem(struct iwch_dev *rhp, struct iwch_pd *php,
- struct iwch_mr *mhp,
- int shift,
- __be64 *page_list)
+static void iwch_finish_mem_reg(struct iwch_mr *mhp, u32 stag)
{
- u32 stag;
u32 mmid;
+ mhp->attr.state = 1;
+ mhp->attr.stag = stag;
+ mmid = stag >> 8;
+ mhp->ibmr.rkey = mhp->ibmr.lkey = stag;
+ insert_handle(mhp->rhp, &mhp->rhp->mmidr, mhp, mmid);
+ PDBG("%s mmid 0x%x mhp %p\n", __func__, mmid, mhp);
+}
+
+int iwch_register_mem(struct iwch_dev *rhp, struct iwch_pd *php,
+ struct iwch_mr *mhp, int shift)
+{
+ u32 stag;
if (cxio_register_phys_mem(&rhp->rdev,
&stag, mhp->attr.pdid,
mhp->attr.zbva,
mhp->attr.va_fbo,
mhp->attr.len,
- shift-12,
- page_list,
- &mhp->attr.pbl_size, &mhp->attr.pbl_addr))
+ shift - 12,
+ mhp->attr.pbl_size, mhp->attr.pbl_addr))
return -ENOMEM;
- mhp->attr.state = 1;
- mhp->attr.stag = stag;
- mmid = stag >> 8;
- mhp->ibmr.rkey = mhp->ibmr.lkey = stag;
- insert_handle(rhp, &rhp->mmidr, mhp, mmid);
- PDBG("%s mmid 0x%x mhp %p\n", __func__, mmid, mhp);
+
+ iwch_finish_mem_reg(mhp, stag);
+
return 0;
}
int iwch_reregister_mem(struct iwch_dev *rhp, struct iwch_pd *php,
struct iwch_mr *mhp,
int shift,
- __be64 *page_list,
int npages)
{
u32 stag;
- u32 mmid;
-
/* We could support this... */
if (npages > mhp->attr.pbl_size)
mhp->attr.zbva,
mhp->attr.va_fbo,
mhp->attr.len,
- shift-12,
- page_list,
- &mhp->attr.pbl_size, &mhp->attr.pbl_addr))
+ shift - 12,
+ mhp->attr.pbl_size, mhp->attr.pbl_addr))
return -ENOMEM;
- mhp->attr.state = 1;
- mhp->attr.stag = stag;
- mmid = stag >> 8;
- mhp->ibmr.rkey = mhp->ibmr.lkey = stag;
- insert_handle(rhp, &rhp->mmidr, mhp, mmid);
- PDBG("%s mmid 0x%x mhp %p\n", __func__, mmid, mhp);
+
+ iwch_finish_mem_reg(mhp, stag);
+
+ return 0;
+}
+
+int iwch_alloc_pbl(struct iwch_mr *mhp, int npages)
+{
+ mhp->attr.pbl_addr = cxio_hal_pblpool_alloc(&mhp->rhp->rdev,
+ npages << 3);
+
+ if (!mhp->attr.pbl_addr)
+ return -ENOMEM;
+
+ mhp->attr.pbl_size = npages;
+
return 0;
}
+void iwch_free_pbl(struct iwch_mr *mhp)
+{
+ cxio_hal_pblpool_free(&mhp->rhp->rdev, mhp->attr.pbl_addr,
+ mhp->attr.pbl_size << 3);
+}
+
+int iwch_write_pbl(struct iwch_mr *mhp, __be64 *pages, int npages, int offset)
+{
+ return cxio_write_pbl(&mhp->rhp->rdev, pages,
+ mhp->attr.pbl_addr + (offset << 3), npages);
+}
+
int build_phys_page_list(struct ib_phys_buf *buffer_list,
int num_phys_buf,
u64 *iova_start,
mmid = mhp->attr.stag >> 8;
cxio_dereg_mem(&rhp->rdev, mhp->attr.stag, mhp->attr.pbl_size,
mhp->attr.pbl_addr);
+ iwch_free_pbl(mhp);
remove_handle(rhp, &rhp->mmidr, mmid);
if (mhp->kva)
kfree((void *) (unsigned long) mhp->kva);
if (!mhp)
return ERR_PTR(-ENOMEM);
+ mhp->rhp = rhp;
+
/* First check that we have enough alignment */
if ((*iova_start & ~PAGE_MASK) != (buffer_list[0].addr & ~PAGE_MASK)) {
ret = -EINVAL;
if (ret)
goto err;
- mhp->rhp = rhp;
+ ret = iwch_alloc_pbl(mhp, npages);
+ if (ret) {
+ kfree(page_list);
+ goto err_pbl;
+ }
+
+ ret = iwch_write_pbl(mhp, page_list, npages, 0);
+ kfree(page_list);
+ if (ret)
+ goto err_pbl;
+
mhp->attr.pdid = php->pdid;
mhp->attr.zbva = 0;
mhp->attr.len = (u32) total_size;
mhp->attr.pbl_size = npages;
- ret = iwch_register_mem(rhp, php, mhp, shift, page_list);
- kfree(page_list);
- if (ret) {
- goto err;
- }
+ ret = iwch_register_mem(rhp, php, mhp, shift);
+ if (ret)
+ goto err_pbl;
+
return &mhp->ibmr;
+
+err_pbl:
+ iwch_free_pbl(mhp);
+
err:
kfree(mhp);
return ERR_PTR(ret);
return ret;
}
- ret = iwch_reregister_mem(rhp, php, &mh, shift, page_list, npages);
+ ret = iwch_reregister_mem(rhp, php, &mh, shift, npages);
kfree(page_list);
if (ret) {
return ret;
if (!mhp)
return ERR_PTR(-ENOMEM);
+ mhp->rhp = rhp;
+
mhp->umem = ib_umem_get(pd->uobject->context, start, length, acc, 0);
if (IS_ERR(mhp->umem)) {
err = PTR_ERR(mhp->umem);
list_for_each_entry(chunk, &mhp->umem->chunk_list, list)
n += chunk->nents;
- pages = kmalloc(n * sizeof(u64), GFP_KERNEL);
+ err = iwch_alloc_pbl(mhp, n);
+ if (err)
+ goto err;
+
+ pages = (__be64 *) __get_free_page(GFP_KERNEL);
if (!pages) {
err = -ENOMEM;
- goto err;
+ goto err_pbl;
}
i = n = 0;
pages[i++] = cpu_to_be64(sg_dma_address(
&chunk->page_list[j]) +
mhp->umem->page_size * k);
+ if (i == PAGE_SIZE / sizeof *pages) {
+ err = iwch_write_pbl(mhp, pages, i, n);
+ if (err)
+ goto pbl_done;
+ n += i;
+ i = 0;
+ }
}
}
- mhp->rhp = rhp;
+ if (i)
+ err = iwch_write_pbl(mhp, pages, i, n);
+
+pbl_done:
+ free_page((unsigned long) pages);
+ if (err)
+ goto err_pbl;
+
mhp->attr.pdid = php->pdid;
mhp->attr.zbva = 0;
mhp->attr.perms = iwch_ib_to_tpt_access(acc);
mhp->attr.va_fbo = virt;
mhp->attr.page_size = shift - 12;
mhp->attr.len = (u32) length;
- mhp->attr.pbl_size = i;
- err = iwch_register_mem(rhp, php, mhp, shift, pages);
- kfree(pages);
+
+ err = iwch_register_mem(rhp, php, mhp, shift);
if (err)
- goto err;
+ goto err_pbl;
if (udata && !t3a_device(rhp)) {
uresp.pbl_addr = (mhp->attr.pbl_addr -
- rhp->rdev.rnic_info.pbl_base) >> 3;
+ rhp->rdev.rnic_info.pbl_base) >> 3;
PDBG("%s user resp pbl_addr 0x%x\n", __func__,
uresp.pbl_addr);
return &mhp->ibmr;
+err_pbl:
+ iwch_free_pbl(mhp);
+
err:
ib_umem_release(mhp->umem);
kfree(mhp);
int iwch_resume_qps(struct iwch_cq *chp);
void stop_read_rep_timer(struct iwch_qp *qhp);
int iwch_register_mem(struct iwch_dev *rhp, struct iwch_pd *php,
- struct iwch_mr *mhp,
- int shift,
- __be64 *page_list);
+ struct iwch_mr *mhp, int shift);
int iwch_reregister_mem(struct iwch_dev *rhp, struct iwch_pd *php,
struct iwch_mr *mhp,
int shift,
- __be64 *page_list,
int npages);
+int iwch_alloc_pbl(struct iwch_mr *mhp, int npages);
+void iwch_free_pbl(struct iwch_mr *mhp);
+int iwch_write_pbl(struct iwch_mr *mhp, __be64 *pages, int npages, int offset);
int build_phys_page_list(struct ib_phys_buf *buffer_list,
int num_phys_buf,
u64 *iova_start,
int mtu_shift;
u32 message_count;
u32 packet_count;
+ atomic_t nr_events; /* events seen */
+ wait_queue_head_t wait_completion;
};
#define IS_SRQ(qp) (qp->ext_type == EQPT_SRQ)
read_lock(&ehca_qp_idr_lock);
qp = idr_find(&ehca_qp_idr, token);
+ if (qp)
+ atomic_inc(&qp->nr_events);
read_unlock(&ehca_qp_idr_lock);
if (!qp)
if (fatal && qp->ext_type == EQPT_SRQBASE)
dispatch_qp_event(shca, qp, IB_EVENT_QP_LAST_WQE_REACHED);
+ if (atomic_dec_and_test(&qp->nr_events))
+ wake_up(&qp->wait_completion);
return;
}
return ERR_PTR(-ENOMEM);
}
+ atomic_set(&my_qp->nr_events, 0);
+ init_waitqueue_head(&my_qp->wait_completion);
spin_lock_init(&my_qp->spinlock_s);
spin_lock_init(&my_qp->spinlock_r);
my_qp->qp_type = qp_type;
idr_remove(&ehca_qp_idr, my_qp->token);
write_unlock_irqrestore(&ehca_qp_idr_lock, flags);
+ /* now wait until all pending events have completed */
+ wait_event(my_qp->wait_completion, !atomic_read(&my_qp->nr_events));
+
h_ret = hipz_h_destroy_qp(shca->ipz_hca_handle, my_qp);
if (h_ret != H_SUCCESS) {
ehca_err(dev, "hipz_h_destroy_qp() failed h_ret=%li "
}
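The ehca change above counts events still being dispatched for a QP (nr_events) and makes destroy wait for that count to drain before tearing the QP down. The standalone pthread sketch below shows the same drain-before-teardown idea; the names obj, inflight and drain_and_destroy are invented for the example:

 #include <pthread.h>
 #include <stdio.h>
 #include <unistd.h>

 /* Toy object that, like the ehca QP, may still have events in flight
  * when someone asks to destroy it. */
 struct obj {
 	pthread_mutex_t lock;
 	pthread_cond_t drained;
 	int inflight;		/* analogous to qp->nr_events */
 };

 static void event_begin(struct obj *o)
 {
 	pthread_mutex_lock(&o->lock);
 	o->inflight++;		/* taken while the object is still findable */
 	pthread_mutex_unlock(&o->lock);
 }

 static void event_end(struct obj *o)
 {
 	pthread_mutex_lock(&o->lock);
 	if (--o->inflight == 0)
 		pthread_cond_signal(&o->drained);	/* like wake_up() */
 	pthread_mutex_unlock(&o->lock);
 }

 static void drain_and_destroy(struct obj *o)
 {
 	/* The real code first removes the QP from the idr so no new events
 	 * can find it; here we only wait for the count to reach zero. */
 	pthread_mutex_lock(&o->lock);
 	while (o->inflight)
 		pthread_cond_wait(&o->drained, &o->lock);
 	pthread_mutex_unlock(&o->lock);
 	printf("safe to free\n");
 }

 static void *finisher(void *arg)
 {
 	usleep(1000);		/* event dispatch still running... */
 	event_end(arg);		/* ...finally completes */
 	return NULL;
 }

 int main(void)
 {
 	struct obj o = { .inflight = 0 };
 	pthread_t t;

 	pthread_mutex_init(&o.lock, NULL);
 	pthread_cond_init(&o.drained, NULL);

 	event_begin(&o);			/* one event in flight */
 	pthread_create(&t, NULL, finisher, &o);
 	drain_and_destroy(&o);			/* blocks until it completes */
 	pthread_join(&t, NULL);
 	return 0;
 }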
reloop:
- for (last = 0, i = 1; !last; i++) {
+ for (last = 0, i = 1; !last; i += !last) {
hdr = dd->ipath_f_get_msgheader(dd, rhf_addr);
eflags = ipath_hdrget_err_flags(rhf_addr);
etype = ipath_hdrget_rcv_type(rhf_addr);
spin_unlock_irqrestore(&ipath_pioavail_lock, flags);
}
+/*
+ * used to force update of pioavailshadow if we can't get a pio buffer.
+ * Needed primarily due to exiting freeze mode after recovering
+ * from errors. Done lazily, because it's safer (known to not
+ * be writing pio buffers).
+ */
+static void ipath_reset_availshadow(struct ipath_devdata *dd)
+{
+ int i, im;
+ unsigned long flags;
+
+ spin_lock_irqsave(&ipath_pioavail_lock, flags);
+ for (i = 0; i < dd->ipath_pioavregs; i++) {
+ u64 val, oldval;
+ /* deal with 6110 chip bug on high register #s */
+ im = (i > 3 && (dd->ipath_flags & IPATH_SWAP_PIOBUFS)) ?
+ i ^ 1 : i;
+ val = le64_to_cpu(dd->ipath_pioavailregs_dma[im]);
+ /*
+ * busy out the buffers not in the kernel avail list,
+ * without changing the generation bits.
+ */
+ oldval = dd->ipath_pioavailshadow[i];
+ dd->ipath_pioavailshadow[i] = val |
+ ((~dd->ipath_pioavailkernel[i] <<
+ INFINIPATH_SENDPIOAVAIL_BUSY_SHIFT) &
+ 0xaaaaaaaaaaaaaaaaULL); /* All BUSY bits in qword */
+ if (oldval != dd->ipath_pioavailshadow[i])
+ ipath_dbg("shadow[%d] was %Lx, now %lx\n",
+ i, oldval, dd->ipath_pioavailshadow[i]);
+ }
+ spin_unlock_irqrestore(&ipath_pioavail_lock, flags);
+}
+
/**
* ipath_setrcvhdrsize - set the receive header size
* @dd: the infinipath device
*/
ipath_stats.sps_nopiobufs++;
if (!(++dd->ipath_consec_nopiobuf % 100000)) {
- ipath_dbg("%u pio sends with no bufavail; dmacopy: "
- "%llx %llx %llx %llx; shadow: %lx %lx %lx %lx\n",
+ ipath_force_pio_avail_update(dd); /* at start */
+ ipath_dbg("%u tries no piobufavail ts%lx; dmacopy: "
+ "%llx %llx %llx %llx\n"
+ "ipath shadow: %lx %lx %lx %lx\n",
dd->ipath_consec_nopiobuf,
+ (unsigned long)get_cycles(),
(unsigned long long) le64_to_cpu(dma[0]),
(unsigned long long) le64_to_cpu(dma[1]),
(unsigned long long) le64_to_cpu(dma[2]),
*/
if ((dd->ipath_piobcnt2k + dd->ipath_piobcnt4k) >
(sizeof(shadow[0]) * 4 * 4))
- ipath_dbg("2nd group: dmacopy: %llx %llx "
- "%llx %llx; shadow: %lx %lx %lx %lx\n",
+ ipath_dbg("2nd group: dmacopy: "
+ "%llx %llx %llx %llx\n"
+ "ipath shadow: %lx %lx %lx %lx\n",
(unsigned long long)le64_to_cpu(dma[4]),
(unsigned long long)le64_to_cpu(dma[5]),
(unsigned long long)le64_to_cpu(dma[6]),
(unsigned long long)le64_to_cpu(dma[7]),
- shadow[4], shadow[5], shadow[6],
- shadow[7]);
+ shadow[4], shadow[5], shadow[6], shadow[7]);
+
+ /* at end, so update likely happened */
+ ipath_reset_availshadow(dd);
}
}
unsigned len, int avail)
{
unsigned long flags;
- unsigned end;
+ unsigned end, cnt = 0, next;
/* There are two bits per send buffer (busy and generation) */
start *= 2;
- len *= 2;
- end = start + len;
+ end = start + len * 2;
- /* Set or clear the generation bits. */
spin_lock_irqsave(&ipath_pioavail_lock, flags);
+ /* Set or clear the busy bit in the shadow. */
while (start < end) {
if (avail) {
- __clear_bit(start + INFINIPATH_SENDPIOAVAIL_BUSY_SHIFT,
- dd->ipath_pioavailshadow);
+ unsigned long dma;
+ int i, im;
+ /*
+ * the BUSY bit will never be set, because we disarm
+ * the user buffers before we hand them back to the
+ * kernel. We do have to make sure the generation
+ * bit is set correctly in shadow, since it could
+ * have changed many times while allocated to user.
+ * We can't use the bitmap functions on the full
+ * dma array because it is always little-endian, so
+ * we have to flip to host-order first.
+ * BITS_PER_LONG is slightly wrong, since it's
+ * always 64 bits per register in chip...
+ * We only work on 64 bit kernels, so that's OK.
+ */
+ /* deal with 6110 chip bug on high register #s */
+ i = start / BITS_PER_LONG;
+ im = (i > 3 && (dd->ipath_flags & IPATH_SWAP_PIOBUFS)) ?
+ i ^ 1 : i;
+ __clear_bit(INFINIPATH_SENDPIOAVAIL_BUSY_SHIFT
+ + start, dd->ipath_pioavailshadow);
+ dma = (unsigned long) le64_to_cpu(
+ dd->ipath_pioavailregs_dma[im]);
+ if (test_bit((INFINIPATH_SENDPIOAVAIL_CHECK_SHIFT
+ + start) % BITS_PER_LONG, &dma))
+ __set_bit(INFINIPATH_SENDPIOAVAIL_CHECK_SHIFT
+ + start, dd->ipath_pioavailshadow);
+ else
+ __clear_bit(INFINIPATH_SENDPIOAVAIL_CHECK_SHIFT
+ + start, dd->ipath_pioavailshadow);
__set_bit(start, dd->ipath_pioavailkernel);
} else {
__set_bit(start + INFINIPATH_SENDPIOAVAIL_BUSY_SHIFT,
}
start += 2;
}
+
+ if (dd->ipath_pioupd_thresh) {
+ end = 2 * (dd->ipath_piobcnt2k + dd->ipath_piobcnt4k);
+ next = find_first_bit(dd->ipath_pioavailkernel, end);
+ while (next < end) {
+ cnt++;
+ next = find_next_bit(dd->ipath_pioavailkernel, end,
+ next + 1);
+ }
+ }
spin_unlock_irqrestore(&ipath_pioavail_lock, flags);
+
+ /*
+ * When moving buffers from kernel to user, if number assigned to
+ * the user is less than the pio update threshold, and threshold
+ * is supported (cnt was computed > 0), drop the update threshold
+ * so we update at least once per allocated number of buffers.
+ * In any case, if the kernel buffers are less than the threshold,
+ * drop the threshold. We don't bother increasing it, having once
+ * decreased it, since it would typically just cycle back and forth.
+ * If we don't decrease below buffers in use, we can wait a long
+ * time for an update, until some other context uses PIO buffers.
+ */
+ if (!avail && len < cnt)
+ cnt = len;
+ if (cnt < dd->ipath_pioupd_thresh) {
+ dd->ipath_pioupd_thresh = cnt;
+ ipath_dbg("Decreased pio update threshold to %u\n",
+ dd->ipath_pioupd_thresh);
+ spin_lock_irqsave(&dd->ipath_sendctrl_lock, flags);
+ dd->ipath_sendctrl &= ~(INFINIPATH_S_UPDTHRESH_MASK
+ << INFINIPATH_S_UPDTHRESH_SHIFT);
+ dd->ipath_sendctrl |= dd->ipath_pioupd_thresh
+ << INFINIPATH_S_UPDTHRESH_SHIFT;
+ ipath_write_kreg(dd, dd->ipath_kregs->kr_sendctrl,
+ dd->ipath_sendctrl);
+ spin_unlock_irqrestore(&dd->ipath_sendctrl_lock, flags);
+ }
}
/**
spin_lock_irqsave(&dd->ipath_sdma_lock, flags);
skip_cancel =
- !test_bit(IPATH_SDMA_DISABLED, statp) &&
- test_and_set_bit(IPATH_SDMA_ABORTING, statp);
+ test_and_set_bit(IPATH_SDMA_ABORTING, statp)
+ && !test_bit(IPATH_SDMA_DISABLED, statp);
spin_unlock_irqrestore(&dd->ipath_sdma_lock, flags);
if (skip_cancel)
goto bail;
ipath_disarm_piobufs(dd, 0,
dd->ipath_piobcnt2k + dd->ipath_piobcnt4k);
+ if (dd->ipath_flags & IPATH_HAS_SEND_DMA)
+ set_bit(IPATH_SDMA_DISARMED, &dd->ipath_sdma_status);
+
if (restore_sendctrl) {
/* else done by caller later if needed */
spin_lock_irqsave(&dd->ipath_sendctrl_lock, flags);
/* only wait so long for intr */
dd->ipath_sdma_abort_intr_timeout = jiffies + HZ;
dd->ipath_sdma_reset_wait = 200;
- __set_bit(IPATH_SDMA_DISARMED, &dd->ipath_sdma_status);
if (!test_bit(IPATH_SDMA_SHUTDOWN, &dd->ipath_sdma_status))
tasklet_hi_schedule(&dd->ipath_sdma_abort_task);
spin_unlock_irqrestore(&dd->ipath_sdma_lock, flags);
(void *) dd->ipath_statusp -
(void *) dd->ipath_pioavailregs_dma;
if (!shared) {
- kinfo->spi_piocnt = dd->ipath_pbufsport;
+ kinfo->spi_piocnt = pd->port_piocnt;
kinfo->spi_piobufbase = (u64) pd->port_piobufs;
kinfo->__spi_uregbase = (u64) dd->ipath_uregbase +
dd->ipath_ureg_align * pd->port_port;
} else if (master) {
- kinfo->spi_piocnt = (dd->ipath_pbufsport / subport_cnt) +
- (dd->ipath_pbufsport % subport_cnt);
+ kinfo->spi_piocnt = (pd->port_piocnt / subport_cnt) +
+ (pd->port_piocnt % subport_cnt);
/* Master's PIO buffers are after all the slave's */
kinfo->spi_piobufbase = (u64) pd->port_piobufs +
dd->ipath_palign *
- (dd->ipath_pbufsport - kinfo->spi_piocnt);
+ (pd->port_piocnt - kinfo->spi_piocnt);
} else {
unsigned slave = subport_fp(fp) - 1;
- kinfo->spi_piocnt = dd->ipath_pbufsport / subport_cnt;
+ kinfo->spi_piocnt = pd->port_piocnt / subport_cnt;
kinfo->spi_piobufbase = (u64) pd->port_piobufs +
dd->ipath_palign * kinfo->spi_piocnt * slave;
}
- /*
- * Set the PIO avail update threshold to no larger
- * than the number of buffers per process. Note that
- * we decrease it here, but won't ever increase it.
- */
- if (dd->ipath_pioupd_thresh &&
- kinfo->spi_piocnt < dd->ipath_pioupd_thresh) {
- unsigned long flags;
-
- dd->ipath_pioupd_thresh = kinfo->spi_piocnt;
- ipath_dbg("Decreased pio update threshold to %u\n",
- dd->ipath_pioupd_thresh);
- spin_lock_irqsave(&dd->ipath_sendctrl_lock, flags);
- dd->ipath_sendctrl &= ~(INFINIPATH_S_UPDTHRESH_MASK
- << INFINIPATH_S_UPDTHRESH_SHIFT);
- dd->ipath_sendctrl |= dd->ipath_pioupd_thresh
- << INFINIPATH_S_UPDTHRESH_SHIFT;
- ipath_write_kreg(dd, dd->ipath_kregs->kr_sendctrl,
- dd->ipath_sendctrl);
- spin_unlock_irqrestore(&dd->ipath_sendctrl_lock, flags);
- }
-
if (shared) {
kinfo->spi_port_uregbase = (u64) dd->ipath_uregbase +
dd->ipath_ureg_align * pd->port_port;
ureg = dd->ipath_uregbase + dd->ipath_ureg_align * pd->port_port;
if (!pd->port_subport_cnt) {
/* port is not shared */
- piocnt = dd->ipath_pbufsport;
+ piocnt = pd->port_piocnt;
piobufs = pd->port_piobufs;
} else if (!subport_fp(fp)) {
/* caller is the master */
- piocnt = (dd->ipath_pbufsport / pd->port_subport_cnt) +
- (dd->ipath_pbufsport % pd->port_subport_cnt);
+ piocnt = (pd->port_piocnt / pd->port_subport_cnt) +
+ (pd->port_piocnt % pd->port_subport_cnt);
piobufs = pd->port_piobufs +
- dd->ipath_palign * (dd->ipath_pbufsport - piocnt);
+ dd->ipath_palign * (pd->port_piocnt - piocnt);
} else {
unsigned slave = subport_fp(fp) - 1;
/* caller is a slave */
- piocnt = dd->ipath_pbufsport / pd->port_subport_cnt;
+ piocnt = pd->port_piocnt / pd->port_subport_cnt;
piobufs = pd->port_piobufs + dd->ipath_palign * piocnt * slave;
}
port_fp(fp) = pd;
pd->port_pid = current->pid;
strncpy(pd->port_comm, current->comm, sizeof(pd->port_comm));
- ipath_chg_pioavailkernel(dd,
- dd->ipath_pbufsport * (pd->port_port - 1),
- dd->ipath_pbufsport, 0);
ipath_stats.sps_ports++;
ret = 0;
} else
/* for now we do nothing with rcvhdrcnt: uinfo->spu_rcvhdrcnt */
+ /* some ports may get extra buffers, calculate that here */
+ if (pd->port_port <= dd->ipath_ports_extrabuf)
+ pd->port_piocnt = dd->ipath_pbufsport + 1;
+ else
+ pd->port_piocnt = dd->ipath_pbufsport;
+
/* for right now, kernel piobufs are at end, so port 1 is at 0 */
+ if (pd->port_port <= dd->ipath_ports_extrabuf)
+ pd->port_pio_base = (dd->ipath_pbufsport + 1)
+ * (pd->port_port - 1);
+ else
+ pd->port_pio_base = dd->ipath_ports_extrabuf +
+ dd->ipath_pbufsport * (pd->port_port - 1);
pd->port_piobufs = dd->ipath_piobufbase +
- dd->ipath_pbufsport * (pd->port_port - 1) * dd->ipath_palign;
- ipath_cdbg(VERBOSE, "Set base of piobufs for port %u to 0x%x\n",
- pd->port_port, pd->port_piobufs);
+ pd->port_pio_base * dd->ipath_palign;
+ ipath_cdbg(VERBOSE, "piobuf base for port %u is 0x%x, piocnt %u,"
+ " first pio %u\n", pd->port_port, pd->port_piobufs,
+ pd->port_piocnt, pd->port_pio_base);
+ ipath_chg_pioavailkernel(dd, pd->port_pio_base, pd->port_piocnt, 0);
/*
* Now allocate the rcvhdr Q and eager TIDs; skip the TID
}
if (dd->ipath_kregbase) {
- int i;
/* atomically clear receive enable port and intr avail. */
clear_bit(dd->ipath_r_portenable_shift + port,
&dd->ipath_rcvctrl);
ipath_write_kreg_port(dd, dd->ipath_kregs->kr_rcvhdraddr,
pd->port_port, dd->ipath_dummy_hdrq_phys);
- i = dd->ipath_pbufsport * (port - 1);
- ipath_disarm_piobufs(dd, i, dd->ipath_pbufsport);
- ipath_chg_pioavailkernel(dd, i, dd->ipath_pbufsport, 1);
+ ipath_disarm_piobufs(dd, pd->port_pio_base, pd->port_piocnt);
+ ipath_chg_pioavailkernel(dd, pd->port_pio_base,
+ pd->port_piocnt, 1);
dd->ipath_f_clear_tids(dd, pd->port_port);
dev_info(&dd->pcidev->dev,
"Recovering from TXE PIO parity error\n");
- ipath_disarm_senderrbufs(dd, 1);
+ ipath_disarm_senderrbufs(dd);
}
ctrl = ipath_read_kreg32(dd, dd->ipath_kregs->kr_control);
if ((ctrl & INFINIPATH_C_FREEZEMODE) && !ipath_diag_inuse) {
/*
- * Parity errors in send memory are recoverable,
- * just cancel the send (if indicated in * sendbuffererror),
- * count the occurrence, unfreeze (if no other handled
- * hardware error bits are set), and continue.
+ * Parity errors in send memory are recoverable by h/w
+ * just do housekeeping, exit freeze mode and continue.
*/
if (hwerrs & ((INFINIPATH_HWE_TXEMEMPARITYERR_PIOBUF |
INFINIPATH_HWE_TXEMEMPARITYERR_PIOPBC)
hwerrs &= ~((INFINIPATH_HWE_TXEMEMPARITYERR_PIOBUF |
INFINIPATH_HWE_TXEMEMPARITYERR_PIOPBC)
<< INFINIPATH_HWE_TXEMEMPARITYERR_SHIFT);
- if (!hwerrs) {
- /* else leave in freeze mode */
- ipath_write_kreg(dd,
- dd->ipath_kregs->kr_control,
- dd->ipath_control);
- goto bail;
- }
}
if (hwerrs) {
/*
*dd->ipath_statusp |= IPATH_STATUS_HWERROR;
dd->ipath_flags &= ~IPATH_INITTED;
} else {
- ipath_dbg("Clearing freezemode on ignored hardware "
- "error\n");
+ ipath_dbg("Clearing freezemode on ignored or "
+ "recovered hardware error\n");
ipath_clear_freeze(dd);
}
}
"revision %u.%u!\n",
dd->ipath_majrev, dd->ipath_minrev);
ret = 1;
- } else if (dd->ipath_minrev == 1) {
- /* Rev1 chips are prototype. Complain, but allow use */
+ } else if (dd->ipath_minrev == 1 &&
+ !(dd->ipath_flags & IPATH_INITTED)) {
+ /* Rev1 chips are prototype. Complain at init, but allow use */
ipath_dev_err(dd, "Unsupported hardware "
"revision %u.%u, Contact support@qlogic.com\n",
dd->ipath_majrev, dd->ipath_minrev);
dd->ipath_rcvctrl);
dd->ipath_p0_rcvegrcnt = 2048; /* always */
if (dd->ipath_flags & IPATH_HAS_SEND_DMA)
- dd->ipath_pioreserved = 1; /* reserve a buffer */
+ dd->ipath_pioreserved = 3; /* kpiobufs used for PIO */
}
/*
* min buffers we want to have per port, after driver
*/
-#define IPATH_MIN_USER_PORT_BUFCNT 8
+#define IPATH_MIN_USER_PORT_BUFCNT 7
/*
* Number of ports we are configured to use (to allow for more pio
/*
* Number of buffers reserved for driver (verbs and layered drivers.)
- * Reserved at end of buffer list. Initialized based on
- * number of PIO buffers if not set via module interface.
+ * Initialized based on number of PIO buffers if not set via module interface.
* The problem with this is that it's global, but we'll use different
- * numbers for different chip types. So the default value is not
- * very useful. I've redefined it for the 1.3 release so that it's
- * zero unless set by the user to something else, in which case we
- * try to respect it.
+ * numbers for different chip types.
*/
static ushort ipath_kpiobufs;
pioavail = dd->ipath_pioavailregs_dma[i ^ 1];
else
pioavail = dd->ipath_pioavailregs_dma[i];
- dd->ipath_pioavailshadow[i] = le64_to_cpu(pioavail) |
- (~dd->ipath_pioavailkernel[i] <<
- INFINIPATH_SENDPIOAVAIL_BUSY_SHIFT);
+ /*
+ * don't need to worry about ipath_pioavailkernel here
+ * because we will call ipath_chg_pioavailkernel() later
+ * in initialization, to busy out buffers as needed
+ */
+ dd->ipath_pioavailshadow[i] = le64_to_cpu(pioavail);
}
/* can get counters, stats, etc. */
dd->ipath_flags |= IPATH_PRESENT;
int ipath_init_chip(struct ipath_devdata *dd, int reinit)
{
int ret = 0;
- u32 val32, kpiobufs;
+ u32 kpiobufs, defkbufs;
u32 piobufs, uports;
u64 val;
struct ipath_portdata *pd;
gfp_t gfp_flags = GFP_USER | __GFP_COMP;
- unsigned long flags;
ret = init_housekeeping(dd, reinit);
if (ret)
dd->ipath_pioavregs = ALIGN(piobufs, sizeof(u64) * BITS_PER_BYTE / 2)
/ (sizeof(u64) * BITS_PER_BYTE / 2);
uports = dd->ipath_cfgports ? dd->ipath_cfgports - 1 : 0;
- if (ipath_kpiobufs == 0) {
- /* not set by user (this is default) */
- if (piobufs > 144)
- kpiobufs = 32;
- else
- kpiobufs = 16;
- }
+ if (piobufs > 144)
+ defkbufs = 32 + dd->ipath_pioreserved;
else
- kpiobufs = ipath_kpiobufs;
+ defkbufs = 16 + dd->ipath_pioreserved;
- if (kpiobufs + (uports * IPATH_MIN_USER_PORT_BUFCNT) > piobufs) {
+ if (ipath_kpiobufs && (ipath_kpiobufs +
+ (uports * IPATH_MIN_USER_PORT_BUFCNT)) > piobufs) {
int i = (int) piobufs -
(int) (uports * IPATH_MIN_USER_PORT_BUFCNT);
if (i < 1)
i = 1;
dev_info(&dd->pcidev->dev, "Allocating %d PIO bufs of "
"%d for kernel leaves too few for %d user ports "
- "(%d each); using %u\n", kpiobufs,
+ "(%d each); using %u\n", ipath_kpiobufs,
piobufs, uports, IPATH_MIN_USER_PORT_BUFCNT, i);
/*
* shouldn't change ipath_kpiobufs, because could be
* different for different devices...
*/
kpiobufs = i;
- }
+ } else if (ipath_kpiobufs)
+ kpiobufs = ipath_kpiobufs;
+ else
+ kpiobufs = defkbufs;
dd->ipath_lastport_piobuf = piobufs - kpiobufs;
dd->ipath_pbufsport =
uports ? dd->ipath_lastport_piobuf / uports : 0;
- val32 = dd->ipath_lastport_piobuf - (dd->ipath_pbufsport * uports);
- if (val32 > 0) {
- ipath_dbg("allocating %u pbufs/port leaves %u unused, "
- "add to kernel\n", dd->ipath_pbufsport, val32);
- dd->ipath_lastport_piobuf -= val32;
- kpiobufs += val32;
- ipath_dbg("%u pbufs/port leaves %u unused, add to kernel\n",
- dd->ipath_pbufsport, val32);
- }
+ /* if not an even divisor, some user ports get extra buffers */
+ dd->ipath_ports_extrabuf = dd->ipath_lastport_piobuf -
+ (dd->ipath_pbufsport * uports);
+ if (dd->ipath_ports_extrabuf)
+ ipath_dbg("%u pbufs/port leaves some unused, add 1 buffer to "
+ "ports <= %u\n", dd->ipath_pbufsport,
+ dd->ipath_ports_extrabuf);
dd->ipath_lastpioindex = 0;
dd->ipath_lastpioindexl = dd->ipath_piobcnt2k;
- ipath_chg_pioavailkernel(dd, 0, piobufs, 1);
+ /* ipath_pioavailshadow initialized earlier */
ipath_cdbg(VERBOSE, "%d PIO bufs for kernel out of %d total %u "
"each for %u user ports\n", kpiobufs,
piobufs, dd->ipath_pbufsport, uports);
- if (dd->ipath_pioupd_thresh) {
- if (dd->ipath_pbufsport < dd->ipath_pioupd_thresh)
- dd->ipath_pioupd_thresh = dd->ipath_pbufsport;
- if (kpiobufs < dd->ipath_pioupd_thresh)
- dd->ipath_pioupd_thresh = kpiobufs;
- }
-
ret = dd->ipath_f_early_init(dd);
if (ret) {
ipath_dev_err(dd, "Early initialization failure\n");
goto done;
}
- /*
- * Cancel any possible active sends from early driver load.
- * Follows early_init because some chips have to initialize
- * PIO buffers in early_init to avoid false parity errors.
- */
- ipath_cancel_sends(dd, 0);
-
/*
* Early_init sets rcvhdrentsize and rcvhdrsize, so this must be
* done after early_init.
ipath_write_kreg(dd, dd->ipath_kregs->kr_sendpioavailaddr,
dd->ipath_pioavailregs_phys);
+
/*
* this is to detect s/w errors, which the h/w works around by
* ignoring the low 6 bits of address, if it wasn't aligned.
~0ULL&~INFINIPATH_HWE_MEMBISTFAILED);
ipath_write_kreg(dd, dd->ipath_kregs->kr_control, 0ULL);
- spin_lock_irqsave(&dd->ipath_sendctrl_lock, flags);
- dd->ipath_sendctrl = INFINIPATH_S_PIOENABLE;
- ipath_write_kreg(dd, dd->ipath_kregs->kr_sendctrl, dd->ipath_sendctrl);
- ipath_read_kreg64(dd, dd->ipath_kregs->kr_scratch);
- spin_unlock_irqrestore(&dd->ipath_sendctrl_lock, flags);
-
/*
* before error clears, since we expect serdes pll errors during
* this, the first time after reset
else
enable_chip(dd, reinit);
+ /* after enable_chip, so pioavailshadow setup */
+ ipath_chg_pioavailkernel(dd, 0, piobufs, 1);
+
+ /*
+ * Cancel any possible active sends from early driver load.
+ * Follows early_init because some chips have to initialize
+ * PIO buffers in early_init to avoid false parity errors.
+ * After enable and ipath_chg_pioavailkernel so we can safely
+ * enable pioavail updates and PIOENABLE; packets are now
+ * ready to go out.
+ */
+ ipath_cancel_sends(dd, 1);
+
if (!reinit) {
/*
* Used when we close a port, for DMA already in flight
#include "ipath_verbs.h"
#include "ipath_common.h"
-/*
- * clear (write) a pio buffer, to clear a parity error. This routine
- * should only be called when in freeze mode, and the buffer should be
- * canceled afterwards.
- */
-static void ipath_clrpiobuf(struct ipath_devdata *dd, u32 pnum)
-{
- u32 __iomem *pbuf;
- u32 dwcnt; /* dword count to write */
- if (pnum < dd->ipath_piobcnt2k) {
- pbuf = (u32 __iomem *) (dd->ipath_pio2kbase + pnum *
- dd->ipath_palign);
- dwcnt = dd->ipath_piosize2k >> 2;
- }
- else {
- pbuf = (u32 __iomem *) (dd->ipath_pio4kbase +
- (pnum - dd->ipath_piobcnt2k) * dd->ipath_4kalign);
- dwcnt = dd->ipath_piosize4k >> 2;
- }
- dev_info(&dd->pcidev->dev,
- "Rewrite PIO buffer %u, to recover from parity error\n",
- pnum);
-
- /* no flush required, since already in freeze */
- writel(dwcnt + 1, pbuf);
- while (--dwcnt)
- writel(0, pbuf++);
-}
/*
* Called when we might have an error that is specific to a particular
* PIO buffer, and may need to cancel that buffer, so it can be re-used.
- * If rewrite is true, and bits are set in the sendbufferror registers,
- * we'll write to the buffer, for error recovery on parity errors.
*/
-void ipath_disarm_senderrbufs(struct ipath_devdata *dd, int rewrite)
+void ipath_disarm_senderrbufs(struct ipath_devdata *dd)
{
u32 piobcnt;
unsigned long sbuf[4];
}
for (i = 0; i < piobcnt; i++)
- if (test_bit(i, sbuf)) {
- if (rewrite)
- ipath_clrpiobuf(dd, i);
+ if (test_bit(i, sbuf))
ipath_disarm_piobufs(dd, i, 1);
- }
/* ignore armlaunch errs for a bit */
dd->ipath_lastcancel = jiffies+3;
}
{
u64 ignore_this_time = 0;
- ipath_disarm_senderrbufs(dd, 0);
+ ipath_disarm_senderrbufs(dd);
if ((errs & E_SUM_LINK_PKTERRS) &&
!(dd->ipath_flags & IPATH_LINKACTIVE)) {
/*
* processes (causing armlaunch), send errors due to going into freeze mode,
* etc., and try to avoid causing extra interrupts while doing so.
* Forcibly update the in-memory pioavail register copies after cleanup
- * because the chip won't do it for anything changing while in freeze mode
- * (we don't want to wait for the next pio buffer state change).
+ * because the chip won't do it while in freeze mode (the register values
+ * themselves are kept correct).
* Make sure that we don't lose any important interrupts by using the chip
* feature that says that writing 0 to a bit in *clear that is set in
* *status will cause an interrupt to be generated again (if allowed by
*/
void ipath_clear_freeze(struct ipath_devdata *dd)
{
- int i, im;
- u64 val;
-
/* disable error interrupts, to avoid confusion */
ipath_write_kreg(dd, dd->ipath_kregs->kr_errormask, 0ULL);
/* also disable interrupts; errormask is sometimes overwritten */
ipath_write_kreg(dd, dd->ipath_kregs->kr_intmask, 0ULL);
- /*
- * clear all sends, because they have may been
- * completed by usercode while in freeze mode, and
- * therefore would not be sent, and eventually
- * might cause the process to run out of bufs
- */
- ipath_cancel_sends(dd, 0);
+ ipath_cancel_sends(dd, 1);
+
+ /* clear the freeze, and be sure chip saw it */
ipath_write_kreg(dd, dd->ipath_kregs->kr_control,
dd->ipath_control);
+ ipath_read_kreg64(dd, dd->ipath_kregs->kr_scratch);
- /* ensure pio avail updates continue */
+ /* force in-memory update now we are out of freeze */
ipath_force_pio_avail_update(dd);
- /*
- * We just enabled pioavailupdate, so dma copy is almost certainly
- * not yet right, so read the registers directly. Similar to init
- */
- for (i = 0; i < dd->ipath_pioavregs; i++) {
- /* deal with 6110 chip bug */
- im = (i > 3 && (dd->ipath_flags & IPATH_SWAP_PIOBUFS)) ?
- i ^ 1 : i;
- val = ipath_read_kreg64(dd, (0x1000 / sizeof(u64)) + im);
- dd->ipath_pioavailregs_dma[i] = cpu_to_le64(val);
- dd->ipath_pioavailshadow[i] = val |
- (~dd->ipath_pioavailkernel[i] <<
- INFINIPATH_SENDPIOAVAIL_BUSY_SHIFT);
- }
-
/*
* force new interrupt if any hwerr, error or interrupt bits are
* still set, and clear "safe" send packet errors related to freeze
ipath_read_kreg64(dd, dd->ipath_kregs->kr_scratch);
spin_unlock_irqrestore(&dd->ipath_sendctrl_lock, flags);
- if (!(dd->ipath_flags & IPATH_HAS_SEND_DMA))
- handle_layer_pioavail(dd);
- else
- ipath_dbg("unexpected BUFAVAIL intr\n");
+ /* always process; even with SDMA, verbs uses PIO for ACKs and VL15 */
+ handle_layer_pioavail(dd);
}
ret = IRQ_HANDLED;
u16 port_subport_cnt;
/* non-zero if port is being shared. */
u16 port_subport_id;
+ /* number of pio bufs for this port (all procs, if shared) */
+ u32 port_piocnt;
+ /* first pio buffer for this port */
+ u32 port_pio_base;
/* chip offset of PIO buffers for this port */
u32 port_piobufs;
/* how many alloc_pages() chunks in port_rcvegrbuf_pages */
u32 ipath_lastrpkts;
/* pio bufs allocated per port */
u32 ipath_pbufsport;
+ /* if remainder on bufs/port, ports < extrabuf get 1 extra */
+ u32 ipath_ports_extrabuf;
u32 ipath_pioupd_thresh; /* update threshold, some chips */
/*
* number of ports configured as max; zero is set to number chip
int ipath_update_eeprom_log(struct ipath_devdata *dd);
void ipath_inc_eeprom_err(struct ipath_devdata *dd, u32 eidx, u32 incr);
u64 ipath_snap_cntr(struct ipath_devdata *, ipath_creg);
-void ipath_disarm_senderrbufs(struct ipath_devdata *, int);
+void ipath_disarm_senderrbufs(struct ipath_devdata *);
void ipath_force_pio_avail_update(struct ipath_devdata *);
void signal_ib_event(struct ipath_devdata *dd, enum ib_event_type ev);
qp->r_wrid_valid = 0;
wc.wr_id = qp->r_wr_id;
wc.status = IB_WC_SUCCESS;
- wc.opcode = IB_WC_RECV;
+ if (opcode == OP(RDMA_WRITE_LAST_WITH_IMMEDIATE) ||
+ opcode == OP(RDMA_WRITE_ONLY_WITH_IMMEDIATE))
+ wc.opcode = IB_WC_RECV_RDMA_WITH_IMM;
+ else
+ wc.opcode = IB_WC_RECV;
wc.vendor_err = 0;
wc.qp = &qp->ibqp;
wc.src_qp = qp->remote_qpn;
wake_up(&qp->wait);
}
-static void want_buffer(struct ipath_devdata *dd)
+static void want_buffer(struct ipath_devdata *dd, struct ipath_qp *qp)
{
- if (!(dd->ipath_flags & IPATH_HAS_SEND_DMA)) {
+ if (!(dd->ipath_flags & IPATH_HAS_SEND_DMA) ||
+ qp->ibqp.qp_type == IB_QPT_SMI) {
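+ /* SMI (QP0) MADs are sent via PIO even when send DMA is available */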
unsigned long flags;
spin_lock_irqsave(&dd->ipath_sendctrl_lock, flags);
spin_lock_irqsave(&dev->pending_lock, flags);
list_add_tail(&qp->piowait, &dev->piowait);
spin_unlock_irqrestore(&dev->pending_lock, flags);
- want_buffer(dev->dd);
+ want_buffer(dev->dd, qp);
dev->n_piowait++;
}
spin_unlock_irqrestore(&dd->ipath_sdma_lock, flags);
/*
- * Don't restart sdma here. Wait until link is up to ACTIVE.
- * VL15 MADs used to bring the link up use PIO, and multiple
- * link transitions otherwise cause the sdma engine to be
+ * Don't restart sdma here (with the exception
+ * below). Wait until link is up to ACTIVE. VL15 MADs
+ * used to bring the link up use PIO, and multiple link
+ * transitions otherwise cause the sdma engine to be
* stopped and started multiple times.
- * The disable is done here, including the shadow, so the
- * state is kept consistent.
- * See ipath_restart_sdma() for the actual starting of sdma.
+ * The disable is done here, including the shadow,
+ * so the state is kept consistent.
+ * See ipath_restart_sdma() for the actual starting
+ * of sdma.
*/
spin_lock_irqsave(&dd->ipath_sendctrl_lock, flags);
dd->ipath_sendctrl &= ~INFINIPATH_S_SDMAENABLE;
/* make sure I see next message */
dd->ipath_sdma_abort_jiffies = 0;
+ /*
+ * Not everything that takes SDMA offline is a link
+ * status change. If the link was up, restart SDMA.
+ */
+ if (dd->ipath_flags & IPATH_LINKACTIVE)
+ ipath_restart_sdma(dd);
+
goto done;
}
goto done;
}
- dd->ipath_sdma_status = 0;
+ /*
+ * Set initial status as if we had been up, then gone down.
+ * This lets initial start on transition to ACTIVE be the
+ * same as restart after link flap.
+ */
+ dd->ipath_sdma_status = IPATH_SDMA_ABORT_ABORTED;
dd->ipath_sdma_abort_jiffies = 0;
dd->ipath_sdma_generation = 0;
dd->ipath_sdma_descq_tail = 0;
ipath_write_kreg(dd, dd->ipath_kregs->kr_senddmaheadaddr,
dd->ipath_sdma_head_phys);
- /* Reserve all the former "kernel" piobufs */
- n = dd->ipath_piobcnt2k + dd->ipath_piobcnt4k - dd->ipath_pioreserved;
- for (i = dd->ipath_lastport_piobuf; i < n; ++i) {
+ /*
+ * Reserve all the former "kernel" piobufs, using high number range
+ * so we get as many 4K buffers as possible
+ */
+ n = dd->ipath_piobcnt2k + dd->ipath_piobcnt4k;
+ i = dd->ipath_lastport_piobuf + dd->ipath_pioreserved;
+ ipath_chg_pioavailkernel(dd, i, n - i, 0);
+ for (; i < n; ++i) {
unsigned word = i / 64;
unsigned bit = i & 63;
BUG_ON(word >= 3);
senddmabufmask[word] |= 1ULL << bit;
}
- ipath_chg_pioavailkernel(dd, dd->ipath_lastport_piobuf,
- n - dd->ipath_lastport_piobuf, 0);
ipath_write_kreg(dd, dd->ipath_kregs->kr_senddmabufmask0,
senddmabufmask[0]);
ipath_write_kreg(dd, dd->ipath_kregs->kr_senddmabufmask1,
ipath_read_kreg64(dd, dd->ipath_kregs->kr_scratch);
spin_unlock_irqrestore(&dd->ipath_sendctrl_lock, flags);
+ /* notify upper layers */
+ ipath_ib_piobufavail(dd->verbs_dev);
+
bail:
return;
}
wqe = get_swqe_ptr(qp, qp->s_head);
wqe->wr = *wr;
- wqe->ssn = qp->s_ssn++;
wqe->length = 0;
if (wr->num_sge) {
acc = wr->opcode >= IB_WR_RDMA_READ ?
goto bail_inval;
} else if (wqe->length > to_idev(qp->ibqp.device)->dd->ipath_ibmtu)
goto bail_inval;
+ wqe->ssn = qp->s_ssn++;
qp->s_head = next;
ret = 0;
config INPUT_PCSPKR
tristate "PC Speaker support"
- depends on ALPHA || X86 || MIPS || PPC_PREP || PPC_CHRP || PPC_PSERIES
+ depends on PCSPKR_PLATFORM
depends on SND_PCSP=n
help
Say Y here if you want the standard PC Speaker to be used for
#elif defined(__arm__)
/* defined in include/asm-arm/arch-xxx/irqs.h */
#include <asm/irq.h>
-#elif defined(CONFIG_SUPERH64)
+#elif defined(CONFIG_SH_CAYMAN)
#include <asm/irq.h>
#else
# define I8042_KBD_IRQ 1
*/
raid10_find_phys(conf, r10_bio);
retry_write:
- blocked_rdev = 0;
+ blocked_rdev = NULL;
rcu_read_lock();
for (i = 0; i < conf->copies; i++) {
int d = r10_bio->devs[i].devnum;
# Makefile for the kernel multimedia device drivers.
#
+obj-y := common/
+
obj-$(CONFIG_VIDEO_MEDIA) += common/
# Since hybrid devices are here, should be compiled if DVB and/or V4L
}
cx = kzalloc(sizeof(struct cx18), GFP_ATOMIC);
- if (cx == 0) {
+ if (!cx) {
spin_unlock(&cx18_cards_lock);
return -ENOMEM;
}
struct saa7134_fh *fh = priv;
struct saa7134_dev *dev = fh->dev;
int err;
- unsigned int flags;
+ unsigned long flags;
if (saa7134_no_overlay > 0) {
printk(KERN_ERR "V4L2_BUF_TYPE_VIDEO_OVERLAY: no_overlay\n");
return 0;
}
+static const struct i2c_device_id tcm825x_id[] = {
+ { "tcm825x", 0 },
+ { }
+};
+MODULE_DEVICE_TABLE(i2c, tcm825x_id);
+
static struct i2c_driver tcm825x_i2c_driver = {
.driver = {
.name = TCM825X_NAME,
},
.probe = tcm825x_probe,
.remove = __exit_p(tcm825x_remove),
+ .id_table = tcm825x_id,
};
static struct tcm825x_sensor tcm825x = {
/* ----------------------------------------------------------------------- */
+static const struct i2c_device_id tlv320aic23b_id[] = {
+ { "tlv320aic23b", 0 },
+ { }
+};
+MODULE_DEVICE_TABLE(i2c, tlv320aic23b_id);
static struct v4l2_i2c_driver_data v4l2_i2c_data = {
.name = "tlv320aic23b",
.command = tlv320aic23b_command,
.probe = tlv320aic23b_probe,
.remove = tlv320aic23b_remove,
+ .id_table = tlv320aic23b_id,
};
}
/* fill required data structures */
- strcpy(client->name, desc->name);
+ if (!id)
+ strlcpy(client->name, desc->name, I2C_NAME_SIZE);
chip->type = desc-chiplist;
chip->shadow.count = desc->registers+1;
chip->prevmode = -1;
return 0;
}
+/* This driver supports many devices and the idea is to let the driver
+ detect which device is present. So rather than listing all supported
+ devices here, we pretend to support a single, fake device type. */
+static const struct i2c_device_id chip_id[] = {
+ { "tvaudio", 0 },
+ { }
+};
+MODULE_DEVICE_TABLE(i2c, chip_id);
+
static struct v4l2_i2c_driver_data v4l2_i2c_data = {
.name = "tvaudio",
.driverid = I2C_DRIVERID_TVAUDIO,
.probe = chip_probe,
.remove = chip_remove,
.legacy_probe = chip_legacy_probe,
+ .id_table = chip_id,
};
/*
host->cclk = host->mclk;
} else {
clk = host->mclk / (2 * ios->clock) - 1;
- if (clk > 256)
+ if (clk >= 256)
clk = 255;
host->cclk = host->mclk / (2 * (clk + 1));
}
host->plat = plat;
host->mclk = clk_get_rate(host->clk);
+ /*
+ * According to the spec, mclk is max 100 MHz,
+ * so we try to adjust the clock down to this
+ * (if possible).
+ */
+ if (host->mclk > 100000000) {
+ ret = clk_set_rate(host->clk, 100000000);
+ if (ret < 0)
+ goto clk_disable;
+ host->mclk = clk_get_rate(host->clk);
+ DBG(host, "eventual mclk rate: %u Hz\n", host->mclk);
+ }
host->mmc = mmc;
host->base = ioremap(dev->res.start, SZ_4K);
if (!host->base) {
config MTD_SOLUTIONENGINE
tristate "CFI Flash device mapped on Hitachi SolutionEngine"
- depends on SUPERH && MTD_CFI && MTD_REDBOOT_PARTS
+ depends on SUPERH && SOLUTION_ENGINE && MTD_CFI && MTD_REDBOOT_PARTS
help
This enables access to the flash chips on the Hitachi SolutionEngine and
similar boards. Say 'Y' if you are building a kernel for such a board.
This enables access to the flash chips on the Hynix evaluation boards.
If you have such a board, say 'Y'.
-config MTD_MPC1211
- tristate "CFI Flash device mapped on Interface MPC-1211"
- depends on SH_MPC1211 && MTD_CFI
- help
- This enables access to the flash chips on the Interface MPC-1211(CTP/PCI/MPC-SH02).
- If you have such a board, say 'Y'.
-
config MTD_OMAP_NOR
tristate "TI OMAP board mappings"
depends on MTD_CFI && ARCH_OMAP
obj-$(CONFIG_MTD_H720X) += h720x-flash.o
obj-$(CONFIG_MTD_SBC8240) += sbc8240.o
obj-$(CONFIG_MTD_NOR_TOTO) += omap-toto-flash.o
-obj-$(CONFIG_MTD_MPC1211) += mpc1211.o
obj-$(CONFIG_MTD_IXP4XX) += ixp4xx.o
obj-$(CONFIG_MTD_IXP2000) += ixp2000.o
obj-$(CONFIG_MTD_WRSBC8260) += wr_sbc82xx_flash.o
+++ /dev/null
-/*
- * Flash on MPC-1211
- *
- * $Id: mpc1211.c,v 1.4 2004/09/16 23:27:13 gleixner Exp $
- *
- * (C) 2002 Interface, Saito.K & Jeanne
- *
- * GPL'd
- */
-
-#include <linux/module.h>
-#include <linux/types.h>
-#include <linux/kernel.h>
-#include <asm/io.h>
-#include <linux/mtd/mtd.h>
-#include <linux/mtd/map.h>
-#include <linux/mtd/partitions.h>
-
-static struct mtd_info *flash_mtd;
-static struct mtd_partition *parsed_parts;
-
-struct map_info mpc1211_flash_map = {
- .name = "MPC-1211 FLASH",
- .size = 0x80000,
- .bankwidth = 1,
-};
-
-static struct mtd_partition mpc1211_partitions[] = {
- {
- .name = "IPL & ETH-BOOT",
- .offset = 0x00000000,
- .size = 0x10000,
- },
- {
- .name = "Flash FS",
- .offset = 0x00010000,
- .size = MTDPART_SIZ_FULL,
- }
-};
-
-static int __init init_mpc1211_maps(void)
-{
- int nr_parts;
-
- mpc1211_flash_map.phys = 0;
- mpc1211_flash_map.virt = (void __iomem *)P2SEGADDR(0);
-
- simple_map_init(&mpc1211_flash_map);
-
- printk(KERN_NOTICE "Probing for flash chips at 0x00000000:\n");
- flash_mtd = do_map_probe("jedec_probe", &mpc1211_flash_map);
- if (!flash_mtd) {
- printk(KERN_NOTICE "Flash chips not detected at either possible location.\n");
- return -ENXIO;
- }
- printk(KERN_NOTICE "MPC-1211: Flash at 0x%08lx\n", mpc1211_flash_map.virt & 0x1fffffff);
- flash_mtd->module = THIS_MODULE;
-
- parsed_parts = mpc1211_partitions;
- nr_parts = ARRAY_SIZE(mpc1211_partitions);
-
- add_mtd_partitions(flash_mtd, parsed_parts, nr_parts);
- return 0;
-}
-
-static void __exit cleanup_mpc1211_maps(void)
-{
- if (parsed_parts)
- del_mtd_partitions(flash_mtd);
- else
- del_mtd_device(flash_mtd);
- map_destroy(flash_mtd);
-}
-
-module_init(init_mpc1211_maps);
-module_exit(cleanup_mpc1211_maps);
-
-MODULE_LICENSE("GPL");
-MODULE_AUTHOR("Saito.K & Jeanne <ksaito@interface.co.jp>");
-MODULE_DESCRIPTION("MTD map driver for MPC-1211 boards. Interface");
{"3c920B-EMB-WNM (ATI Radeon 9100 IGP)",
PCI_USES_MASTER, IS_TORNADO|HAS_MII|HAS_HWCKSM, 128, },
{"3c980 Cyclone",
- PCI_USES_MASTER, IS_CYCLONE|HAS_HWCKSM, 128, },
+ PCI_USES_MASTER, IS_CYCLONE|HAS_HWCKSM|EXTRA_PREAMBLE, 128, },
{"3c980C Python-T",
PCI_USES_MASTER, IS_CYCLONE|HAS_NWAY|HAS_HWCKSM, 128, },
struct sk_buff* tx_skbuff[TX_RING_SIZE];
unsigned int cur_rx, cur_tx; /* The next free ring entry */
unsigned int dirty_rx, dirty_tx; /* The ring entries to be free()ed. */
- struct net_device_stats stats; /* Generic stats */
struct vortex_extra_stats xstats; /* NIC-specific extra stats */
struct sk_buff *tx_skb; /* Packet being eaten by bus master ctrl. */
dma_addr_t tx_skb_dma; /* Allocated DMA address for bus master ctrl DMA. */
issue_and_wait(dev, TxReset);
- vp->stats.tx_errors++;
+ dev->stats.tx_errors++;
if (vp->full_bus_master_tx) {
printk(KERN_DEBUG "%s: Resetting the Tx ring pointer.\n", dev->name);
if (vp->cur_tx - vp->dirty_tx > 0 && ioread32(ioaddr + DownListPtr) == 0)
iowrite8(PKT_BUF_SZ>>8, ioaddr + TxFreeThreshold);
iowrite16(DownUnstall, ioaddr + EL3_CMD);
} else {
- vp->stats.tx_dropped++;
+ dev->stats.tx_dropped++;
netif_wake_queue(dev);
}
}
dump_tx_ring(dev);
}
- if (tx_status & 0x14) vp->stats.tx_fifo_errors++;
- if (tx_status & 0x38) vp->stats.tx_aborted_errors++;
+ if (tx_status & 0x14) dev->stats.tx_fifo_errors++;
+ if (tx_status & 0x38) dev->stats.tx_aborted_errors++;
if (tx_status & 0x08) vp->xstats.tx_max_collisions++;
iowrite8(0, ioaddr + TxStatus);
if (tx_status & 0x30) { /* txJabber or txUnderrun */
if (vortex_debug > 2)
printk(KERN_DEBUG "%s: Tx error, status %2.2x.\n",
dev->name, tx_status);
- if (tx_status & 0x04) vp->stats.tx_fifo_errors++;
- if (tx_status & 0x38) vp->stats.tx_aborted_errors++;
+ if (tx_status & 0x04) dev->stats.tx_fifo_errors++;
+ if (tx_status & 0x38) dev->stats.tx_aborted_errors++;
if (tx_status & 0x30) {
issue_and_wait(dev, TxReset);
}
} else {
printk(KERN_DEBUG "boomerang_interrupt: no skb!\n");
}
- /* vp->stats.tx_packets++; Counted below. */
+ /* dev->stats.tx_packets++; Counted below. */
dirty_tx++;
}
vp->dirty_tx = dirty_tx;
unsigned char rx_error = ioread8(ioaddr + RxErrors);
if (vortex_debug > 2)
printk(KERN_DEBUG " Rx error: status %2.2x.\n", rx_error);
- vp->stats.rx_errors++;
- if (rx_error & 0x01) vp->stats.rx_over_errors++;
- if (rx_error & 0x02) vp->stats.rx_length_errors++;
- if (rx_error & 0x04) vp->stats.rx_frame_errors++;
- if (rx_error & 0x08) vp->stats.rx_crc_errors++;
- if (rx_error & 0x10) vp->stats.rx_length_errors++;
+ dev->stats.rx_errors++;
+ if (rx_error & 0x01) dev->stats.rx_over_errors++;
+ if (rx_error & 0x02) dev->stats.rx_length_errors++;
+ if (rx_error & 0x04) dev->stats.rx_frame_errors++;
+ if (rx_error & 0x08) dev->stats.rx_crc_errors++;
+ if (rx_error & 0x10) dev->stats.rx_length_errors++;
} else {
/* The packet length: up to 4.5K!. */
int pkt_len = rx_status & 0x1fff;
skb->protocol = eth_type_trans(skb, dev);
netif_rx(skb);
dev->last_rx = jiffies;
- vp->stats.rx_packets++;
+ dev->stats.rx_packets++;
/* Wait a limited time to go to next packet. */
for (i = 200; i >= 0; i--)
if ( ! (ioread16(ioaddr + EL3_STATUS) & CmdInProgress))
} else if (vortex_debug > 0)
printk(KERN_NOTICE "%s: No memory to allocate a sk_buff of "
"size %d.\n", dev->name, pkt_len);
- vp->stats.rx_dropped++;
+ dev->stats.rx_dropped++;
}
issue_and_wait(dev, RxDiscard);
}
unsigned char rx_error = rx_status >> 16;
if (vortex_debug > 2)
printk(KERN_DEBUG " Rx error: status %2.2x.\n", rx_error);
- vp->stats.rx_errors++;
- if (rx_error & 0x01) vp->stats.rx_over_errors++;
- if (rx_error & 0x02) vp->stats.rx_length_errors++;
- if (rx_error & 0x04) vp->stats.rx_frame_errors++;
- if (rx_error & 0x08) vp->stats.rx_crc_errors++;
- if (rx_error & 0x10) vp->stats.rx_length_errors++;
+ dev->stats.rx_errors++;
+ if (rx_error & 0x01) dev->stats.rx_over_errors++;
+ if (rx_error & 0x02) dev->stats.rx_length_errors++;
+ if (rx_error & 0x04) dev->stats.rx_frame_errors++;
+ if (rx_error & 0x08) dev->stats.rx_crc_errors++;
+ if (rx_error & 0x10) dev->stats.rx_length_errors++;
} else {
/* The packet length: up to 4.5K!. */
int pkt_len = rx_status & 0x1fff;
}
netif_rx(skb);
dev->last_rx = jiffies;
- vp->stats.rx_packets++;
+ dev->stats.rx_packets++;
}
entry = (++vp->cur_rx) % RX_RING_SIZE;
}
del_timer_sync(&vp->rx_oom_timer);
del_timer_sync(&vp->timer);
- /* Turn off statistics ASAP. We update vp->stats below. */
+ /* Turn off statistics ASAP. We update dev->stats below. */
iowrite16(StatsDisable, ioaddr + EL3_CMD);
/* Disable the receiver and transmitter. */
update_stats(ioaddr, dev);
spin_unlock_irqrestore (&vp->lock, flags);
}
- return &vp->stats;
+ return &dev->stats;
}
/* Update statistics.
/* Unlike the 3c5x9 we need not turn off stats updates while reading. */
/* Switch to the stats window, and read everything. */
EL3WINDOW(6);
- vp->stats.tx_carrier_errors += ioread8(ioaddr + 0);
- vp->stats.tx_heartbeat_errors += ioread8(ioaddr + 1);
- vp->stats.tx_window_errors += ioread8(ioaddr + 4);
- vp->stats.rx_fifo_errors += ioread8(ioaddr + 5);
- vp->stats.tx_packets += ioread8(ioaddr + 6);
- vp->stats.tx_packets += (ioread8(ioaddr + 9)&0x30) << 4;
+ dev->stats.tx_carrier_errors += ioread8(ioaddr + 0);
+ dev->stats.tx_heartbeat_errors += ioread8(ioaddr + 1);
+ dev->stats.tx_window_errors += ioread8(ioaddr + 4);
+ dev->stats.rx_fifo_errors += ioread8(ioaddr + 5);
+ dev->stats.tx_packets += ioread8(ioaddr + 6);
+ dev->stats.tx_packets += (ioread8(ioaddr + 9)&0x30) << 4;
/* Rx packets */ ioread8(ioaddr + 7); /* Must read to clear */
/* Don't bother with register 9, an extension of registers 6&7.
If we do use the 6&7 values the atomic update assumption above
is invalid. */
- vp->stats.rx_bytes += ioread16(ioaddr + 10);
- vp->stats.tx_bytes += ioread16(ioaddr + 12);
+ dev->stats.rx_bytes += ioread16(ioaddr + 10);
+ dev->stats.tx_bytes += ioread16(ioaddr + 12);
/* Extra stats for get_ethtool_stats() */
vp->xstats.tx_multiple_collisions += ioread8(ioaddr + 2);
vp->xstats.tx_single_collisions += ioread8(ioaddr + 3);
EL3WINDOW(4);
vp->xstats.rx_bad_ssd += ioread8(ioaddr + 12);
- vp->stats.collisions = vp->xstats.tx_multiple_collisions
+ dev->stats.collisions = vp->xstats.tx_multiple_collisions
+ vp->xstats.tx_single_collisions
+ vp->xstats.tx_max_collisions;
{
u8 up = ioread8(ioaddr + 13);
- vp->stats.rx_bytes += (up & 0x0f) << 16;
- vp->stats.tx_bytes += (up & 0xf0) << 12;
+ dev->stats.rx_bytes += (up & 0x0f) << 16;
+ dev->stats.tx_bytes += (up & 0xf0) << 12;
}
EL3WINDOW(old_window >> 13);
To compile this driver as a module, choose M here. The module
will be called pcnet32.
-config PCNET32_NAPI
- bool "Use RX polling (NAPI)"
- depends on PCNET32
- help
- NAPI is a new driver API designed to reduce CPU and interrupt load
- when the driver is receiving lots of packets from the card. It is
- still somewhat experimental and thus not yet enabled by default.
-
- If your estimated Rx load is 10kpps or more, or if the card will be
- deployed on potentially unfriendly networks (e.g. in a firewall),
- then say Y here.
-
- If in doubt, say N.
-
config AMD8111_ETH
tristate "AMD 8111 (new PCI lance) support"
depends on NET_PCI && PCI
{
outb(0, ioaddr+DAYNA_RESET); /* Assert the reset port */
inb(ioaddr+DAYNA_RESET); /* Clear the reset */
- if(sleep)
- {
- long snap=jiffies;
-
- /* Let card finish initializing, about 1/3 second */
- while (time_before(jiffies, snap + HZ/3))
- schedule();
- }
- else
- mdelay(333);
+ if (sleep)
+ msleep(333);
+ else
+ mdelay(333);
}
+
netif_wake_queue(dev);
- return;
}
static void cops_load (struct net_device *dev)
res = netdev_set_master(slave_dev, bond_dev);
if (res) {
dprintk("Error %d calling netdev_set_master\n", res);
- goto err_close;
+ goto err_restore_mac;
}
/* open the slave since the application closed it */
res = dev_open(slave_dev);
if (res) {
dprintk("Openning slave %s failed\n", slave_dev->name);
- goto err_restore_mac;
+ goto err_unset_master;
}
new_slave->dev = slave_dev;
*/
res = bond_alb_init_slave(bond, new_slave);
if (res) {
- goto err_unset_master;
+ goto err_close;
}
}
res = bond_create_slave_symlinks(bond_dev, slave_dev);
if (res)
- goto err_unset_master;
+ goto err_close;
printk(KERN_INFO DRV_NAME
": %s: enslaving %s as a%s interface with a%s link.\n",
return 0;
/* Undo stages on error */
-err_unset_master:
- netdev_set_master(slave_dev, NULL);
-
err_close:
dev_close(slave_dev);
+err_unset_master:
+ netdev_set_master(slave_dev, NULL);
+
err_restore_mac:
if (!bond->params.fail_over_mac) {
memcpy(addr.sa_data, new_slave->perm_hwaddr, ETH_ALEN);
if (res < 0) {
rtnl_lock();
down_write(&bonding_rwsem);
- goto out_bond;
+ bond_deinit(bond_dev);
+ unregister_netdevice(bond_dev);
+ goto out_rtnl;
}
return 0;
destroy_workqueue(bond->wq);
}
+ bond_destroy_sysfs();
+
rtnl_lock();
bond_free_all();
- bond_destroy_sysfs();
rtnl_unlock();
out:
return res;
unregister_netdevice_notifier(&bond_netdev_notifier);
unregister_inetaddr_notifier(&bond_inetaddr_notifier);
+ bond_destroy_sysfs();
+
rtnl_lock();
bond_free_all();
- bond_destroy_sysfs();
rtnl_unlock();
}
": Unable remove bond %s due to open references.\n",
ifname);
res = -EPERM;
- goto out;
+ goto out_unlock;
}
printk(KERN_INFO DRV_NAME
": %s is being deleted...\n",
bond->dev->name);
bond_destroy(bond);
- up_write(&bonding_rwsem);
- rtnl_unlock();
- goto out;
+ goto out_unlock;
}
printk(KERN_ERR DRV_NAME
": unable to delete non-existent bond %s\n", ifname);
res = -ENODEV;
- up_write(&bonding_rwsem);
- rtnl_unlock();
- goto out;
+ goto out_unlock;
}
err_no_cmd:
printk(KERN_ERR DRV_NAME
": no command found in bonding_masters. Use +ifname or -ifname.\n");
- res = -EPERM;
+ return -EPERM;
+
+out_unlock:
+ up_write(&bonding_rwsem);
+ rtnl_unlock();
/* Always return either count or an error. If you return 0, you'll
* get called forever, which is bad.
u8 *fw_data;
struct ch_mem_range t;
- if (!capable(CAP_NET_ADMIN))
+ if (!capable(CAP_SYS_RAWIO))
return -EPERM;
if (copy_from_user(&t, useraddr, sizeof(t)))
return -EFAULT;
-
+ /* Should t.len be sanity-checked here? */
fw_data = kmalloc(t.len, GFP_KERNEL);
if (!fw_data)
return -ENOMEM;
#define IFE_E_PHY_ID 0x02A80330
#define IFE_PLUS_E_PHY_ID 0x02A80320
#define IFE_C_E_PHY_ID 0x02A80310
+#define BME1000_E_PHY_ID 0x01410CB0
+#define BME1000_E_PHY_ID_R2 0x01410CB1
/* M88E1000 Specific Registers */
#define M88E1000_PHY_SPEC_CTRL 0x10 /* PHY Specific Control Register */
#define M88EC018_EPSCR_DOWNSHIFT_COUNTER_MASK 0x0E00
#define M88EC018_EPSCR_DOWNSHIFT_COUNTER_5X 0x0800
+/* BME1000 PHY Specific Control Register */
+#define BME1000_PSCR_ENABLE_DOWNSHIFT 0x0800 /* 1 = enable downshift */
+
+
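+/*
+ * Combine a PHY page and register number into one flat offset:
+ * page in the high bits, register (0-31) in the low 5 bits.
+ */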
+#define PHY_PAGE_SHIFT 5
+#define PHY_REG(page, reg) (((page) << PHY_PAGE_SHIFT) | \
+ ((reg) & MAX_PHY_REG_ADDRESS))
+
/*
* Bits...
* 15-5: page
/* arrays of page information for packet split */
struct e1000_ps_page *ps_pages;
};
-
+ struct page *page;
};
struct e1000_ring {
#define FLAG_HAS_CTRLEXT_ON_LOAD (1 << 5)
#define FLAG_HAS_SWSM_ON_LOAD (1 << 6)
#define FLAG_HAS_JUMBO_FRAMES (1 << 7)
+#define FLAG_IS_ICH (1 << 9)
#define FLAG_HAS_SMART_POWER_DOWN (1 << 11)
#define FLAG_IS_QUAD_PORT_A (1 << 12)
#define FLAG_IS_QUAD_PORT (1 << 13)
bool state);
extern void e1000e_igp3_phy_powerdown_workaround_ich8lan(struct e1000_hw *hw);
extern void e1000e_gig_downshift_workaround_ich8lan(struct e1000_hw *hw);
+extern void e1000e_disable_gig_wol_ich8lan(struct e1000_hw *hw);
extern s32 e1000e_check_for_copper_link(struct e1000_hw *hw);
extern s32 e1000e_check_for_fiber_link(struct e1000_hw *hw);
extern s32 e1000e_read_phy_reg_m88(struct e1000_hw *hw, u32 offset, u16 *data);
extern s32 e1000e_write_phy_reg_m88(struct e1000_hw *hw, u32 offset, u16 data);
extern enum e1000_phy_type e1000e_get_phy_type_from_id(u32 phy_id);
+extern s32 e1000e_determine_phy_address(struct e1000_hw *hw);
+extern s32 e1000e_write_phy_reg_bm(struct e1000_hw *hw, u32 offset, u16 data);
+extern s32 e1000e_read_phy_reg_bm(struct e1000_hw *hw, u32 offset, u16 *data);
extern void e1000e_phy_force_speed_duplex_setup(struct e1000_hw *hw, u16 *phy_ctrl);
extern s32 e1000e_write_kmrn_reg(struct e1000_hw *hw, u32 offset, u16 data);
extern s32 e1000e_read_kmrn_reg(struct e1000_hw *hw, u32 offset, u16 *data);
for (i = 0; i < last_word - first_word + 1; i++) {
ret_val = e1000_read_nvm(hw, first_word + i, 1,
&eeprom_buff[i]);
- if (ret_val)
+ if (ret_val) {
+ /* a read error occurred, throw away the
+ * result */
+ memset(eeprom_buff, 0xff, sizeof(eeprom_buff));
break;
+ }
}
}
/* restore previous status */
ew32(STATUS, before);
- if ((mac->type != e1000_ich8lan) &&
- (mac->type != e1000_ich9lan)) {
+ if (!(adapter->flags & FLAG_IS_ICH)) {
REG_PATTERN_TEST(E1000_FCAL, 0xFFFFFFFF, 0xFFFFFFFF);
REG_PATTERN_TEST(E1000_FCAH, 0x0000FFFF, 0xFFFFFFFF);
REG_PATTERN_TEST(E1000_FCT, 0x0000FFFF, 0xFFFFFFFF);
REG_SET_AND_CHECK(E1000_RCTL, 0xFFFFFFFF, 0x00000000);
- before = (((mac->type == e1000_ich8lan) ||
- (mac->type == e1000_ich9lan)) ? 0x06C3B33E : 0x06DFB3FE);
+ before = ((adapter->flags & FLAG_IS_ICH) ? 0x06C3B33E : 0x06DFB3FE);
REG_SET_AND_CHECK(E1000_RCTL, before, 0x003FFFFB);
REG_SET_AND_CHECK(E1000_TCTL, 0xFFFFFFFF, 0x00000000);
REG_SET_AND_CHECK(E1000_RCTL, before, 0xFFFFFFFF);
REG_PATTERN_TEST(E1000_RDBAL, 0xFFFFFFF0, 0xFFFFFFFF);
- if ((mac->type != e1000_ich8lan) &&
- (mac->type != e1000_ich9lan))
+ if (!(adapter->flags & FLAG_IS_ICH))
REG_PATTERN_TEST(E1000_TXCW, 0xC000FFFF, 0x0000FFFF);
REG_PATTERN_TEST(E1000_TDBAL, 0xFFFFFFF0, 0xFFFFFFFF);
REG_PATTERN_TEST(E1000_TIDV, 0x0000FFFF, 0x0000FFFF);
/* Test each interrupt */
for (i = 0; i < 10; i++) {
-
- if (((adapter->hw.mac.type == e1000_ich8lan) ||
- (adapter->hw.mac.type == e1000_ich9lan)) && i == 8)
+ if ((adapter->flags & FLAG_IS_ICH) && (i == 8))
continue;
/* Interrupt to test */
struct e1000_hw *hw = &adapter->hw;
u32 ctrl_reg = 0;
u32 stat_reg = 0;
+ u16 phy_reg = 0;
hw->mac.autoneg = 0;
E1000_CTRL_SPD_100 |/* Force Speed to 100 */
E1000_CTRL_FD); /* Force Duplex to FULL */
break;
+ case e1000_phy_bm:
+ /* Set Default MAC Interface speed to 1GB */
+ e1e_rphy(hw, PHY_REG(2, 21), &phy_reg);
+ phy_reg &= ~0x0007;
+ phy_reg |= 0x006;
+ e1e_wphy(hw, PHY_REG(2, 21), phy_reg);
+ /* Assert SW reset for above settings to take effect */
+ e1000e_commit_phy(hw);
+ mdelay(1);
+ /* Force Full Duplex */
+ e1e_rphy(hw, PHY_REG(769, 16), &phy_reg);
+ e1e_wphy(hw, PHY_REG(769, 16), phy_reg | 0x000C);
+ /* Set Link Up (in force link) */
+ e1e_rphy(hw, PHY_REG(776, 16), &phy_reg);
+ e1e_wphy(hw, PHY_REG(776, 16), phy_reg | 0x0040);
+ /* Force Link */
+ e1e_rphy(hw, PHY_REG(769, 16), &phy_reg);
+ e1e_wphy(hw, PHY_REG(769, 16), phy_reg | 0x0040);
+ /* Set Early Link Enable */
+ e1e_rphy(hw, PHY_REG(769, 20), &phy_reg);
+ e1e_wphy(hw, PHY_REG(769, 20), phy_reg | 0x0400);
+ /* fall through */
default:
/* force 1000, set loopback */
e1e_wphy(hw, PHY_CONTROL, 0x4140);
E1000_CTRL_SPD_1000 |/* Force Speed to 1000 */
E1000_CTRL_FD); /* Force Duplex to FULL */
- if ((adapter->hw.mac.type == e1000_ich8lan) ||
- (adapter->hw.mac.type == e1000_ich9lan))
+ if (adapter->flags & FLAG_IS_ICH)
ctrl_reg |= E1000_CTRL_SLU; /* Set Link Up */
}
#define IGP01E1000_PHY_LINK_HEALTH 0x13 /* PHY Link Health */
#define IGP02E1000_PHY_POWER_MGMT 0x19 /* Power Management */
#define IGP01E1000_PHY_PAGE_SELECT 0x1F /* Page Select */
+#define BM_PHY_PAGE_SELECT 22 /* Page Select for BM */
+#define IGP_PAGE_SHIFT 5
+#define PHY_REG_MASK 0x1F
+
+#define BM_WUC_PAGE 800
+#define BM_WUC_ADDRESS_OPCODE 0x11
+#define BM_WUC_DATA_OPCODE 0x12
+#define BM_WUC_ENABLE_PAGE 769
+#define BM_WUC_ENABLE_REG 17
+#define BM_WUC_ENABLE_BIT (1 << 2)
+#define BM_WUC_HOST_WU_BIT (1 << 4)
+
+#define BM_WUC PHY_REG(BM_WUC_PAGE, 1)
+#define BM_WUFC PHY_REG(BM_WUC_PAGE, 2)
+#define BM_WUS PHY_REG(BM_WUC_PAGE, 3)
#define IGP01E1000_PHY_PCS_INIT_REG 0x00B4
#define IGP01E1000_PHY_POLARITY_MASK 0x0078
#define E1000_DEV_ID_ICH8_IFE_G 0x10C5
#define E1000_DEV_ID_ICH8_IGP_M 0x104D
#define E1000_DEV_ID_ICH9_IGP_AMT 0x10BD
+#define E1000_DEV_ID_ICH9_IGP_M_AMT 0x10F5
+#define E1000_DEV_ID_ICH9_IGP_M 0x10BF
+#define E1000_DEV_ID_ICH9_IGP_M_V 0x10CB
#define E1000_DEV_ID_ICH9_IGP_C 0x294C
#define E1000_DEV_ID_ICH9_IFE 0x10C0
#define E1000_DEV_ID_ICH9_IFE_GT 0x10C3
#define E1000_DEV_ID_ICH9_IFE_G 0x10C2
+#define E1000_DEV_ID_ICH10_R_BM_LM 0x10CC
+#define E1000_DEV_ID_ICH10_R_BM_LF 0x10CD
+#define E1000_DEV_ID_ICH10_R_BM_V 0x10CE
#define E1000_FUNC_1 1
e1000_phy_gg82563,
e1000_phy_igp_3,
e1000_phy_ife,
+ e1000_phy_bm,
};
enum e1000_bus_width {
* 82566DM Gigabit Network Connection
* 82566MC Gigabit Network Connection
* 82566MM Gigabit Network Connection
+ * 82567LM Gigabit Network Connection
+ * 82567LF Gigabit Network Connection
+ * 82567LM-2 Gigabit Network Connection
+ * 82567LF-2 Gigabit Network Connection
+ * 82567V-2 Gigabit Network Connection
+ * 82562GT-3 10/100 Network Connection
*/
#include <linux/netdevice.h>
phy->addr = 1;
phy->reset_delay_us = 100;
+ /*
+ * We may need to do this twice - once for IGP and if that fails,
+ * we'll set BM func pointers and try again
+ */
+ ret_val = e1000e_determine_phy_address(hw);
+ if (ret_val) {
+ hw->phy.ops.write_phy_reg = e1000e_write_phy_reg_bm;
+ hw->phy.ops.read_phy_reg = e1000e_read_phy_reg_bm;
+ ret_val = e1000e_determine_phy_address(hw);
+ if (ret_val)
+ return ret_val;
+ }
+
phy->id = 0;
while ((e1000_phy_unknown == e1000e_get_phy_type_from_id(phy->id)) &&
(i++ < 100)) {
phy->type = e1000_phy_ife;
phy->autoneg_mask = E1000_ALL_NOT_GIG;
break;
+ case BME1000_E_PHY_ID:
+ phy->type = e1000_phy_bm;
+ phy->autoneg_mask = AUTONEG_ADVERTISE_SPEED_DEFAULT;
+ hw->phy.ops.read_phy_reg = e1000e_read_phy_reg_bm;
+ hw->phy.ops.write_phy_reg = e1000e_write_phy_reg_bm;
+ hw->phy.ops.commit_phy = e1000e_phy_sw_reset;
+ break;
default:
return -E1000_ERR_PHY;
break;
return e1000_get_phy_info_ife_ich8lan(hw);
break;
case e1000_phy_igp_3:
+ case e1000_phy_bm:
return e1000e_get_phy_info_igp(hw);
break;
default:
s32 ret_val = 0;
u16 data;
- if (phy->type != e1000_phy_igp_3)
+ if (phy->type == e1000_phy_ife)
return ret_val;
phy_ctrl = er32(PHY_CTRL);
ret_val = e1000e_copper_link_setup_igp(hw);
if (ret_val)
return ret_val;
+ } else if (hw->phy.type == e1000_phy_bm) {
+ ret_val = e1000e_copper_link_setup_m88(hw);
+ if (ret_val)
+ return ret_val;
}
+ if (hw->phy.type == e1000_phy_ife) {
+ ret_val = e1e_rphy(hw, IFE_PHY_MDIX_CONTROL, &reg_data);
+ if (ret_val)
+ return ret_val;
+
+ reg_data &= ~IFE_PMC_AUTO_MDIX;
+
+ switch (hw->phy.mdix) {
+ case 1:
+ reg_data &= ~IFE_PMC_FORCE_MDIX;
+ break;
+ case 2:
+ reg_data |= IFE_PMC_FORCE_MDIX;
+ break;
+ case 0:
+ default:
+ reg_data |= IFE_PMC_AUTO_MDIX;
+ break;
+ }
+ ret_val = e1e_wphy(hw, IFE_PHY_MDIX_CONTROL, reg_data);
+ if (ret_val)
+ return ret_val;
+ }
return e1000e_setup_copper_link(hw);
}
reg_data);
}
+/**
+ * e1000e_disable_gig_wol_ich8lan - disable gig during WoL
+ * @hw: pointer to the HW structure
+ *
+ * During S0 to Sx transition, it is possible the link remains at gig
+ * instead of negotiating to a lower speed. Before going to Sx, set
+ * 'LPLU Enabled' and 'Gig Disable' to force link speed negotiation
+ * to a lower speed.
+ *
+ * Should only be called for ICH9 devices.
+ **/
+void e1000e_disable_gig_wol_ich8lan(struct e1000_hw *hw)
+{
+ u32 phy_ctrl;
+
+ if (hw->mac.type == e1000_ich9lan) {
+ phy_ctrl = er32(PHY_CTRL);
+ phy_ctrl |= E1000_PHY_CTRL_D0A_LPLU |
+ E1000_PHY_CTRL_GBE_DISABLE;
+ ew32(PHY_CTRL, phy_ctrl);
+ }
+
+ return;
+}
+
/**
* e1000_cleanup_led_ich8lan - Restore the default LED operation
* @hw: pointer to the HW structure
struct e1000_info e1000_ich8_info = {
.mac = e1000_ich8lan,
.flags = FLAG_HAS_WOL
+ | FLAG_IS_ICH
| FLAG_RX_CSUM_ENABLED
| FLAG_HAS_CTRLEXT_ON_LOAD
| FLAG_HAS_AMT
struct e1000_info e1000_ich9_info = {
.mac = e1000_ich9lan,
.flags = FLAG_HAS_JUMBO_FRAMES
+ | FLAG_IS_ICH
| FLAG_HAS_WOL
| FLAG_RX_CSUM_ENABLED
| FLAG_HAS_CTRLEXT_ON_LOAD
#include <linux/if_vlan.h>
#include <linux/cpu.h>
#include <linux/smp.h>
+#include <linux/pm_qos_params.h>
#include "e1000.h"
-#define DRV_VERSION "0.2.1"
+#define DRV_VERSION "0.3.3.3-k2"
char e1000e_driver_name[] = "e1000e";
const char e1000e_driver_version[] = DRV_VERSION;
}
}
+/**
+ * e1000_alloc_jumbo_rx_buffers - Replace used jumbo receive buffers
+ * @adapter: address of board private structure
+ * @rx_ring: pointer to receive ring structure
+ * @cleaned_count: number of buffers to allocate this pass
+ **/
+
+static void e1000_alloc_jumbo_rx_buffers(struct e1000_adapter *adapter,
+ int cleaned_count)
+{
+ struct net_device *netdev = adapter->netdev;
+ struct pci_dev *pdev = adapter->pdev;
+ struct e1000_rx_desc *rx_desc;
+ struct e1000_ring *rx_ring = adapter->rx_ring;
+ struct e1000_buffer *buffer_info;
+ struct sk_buff *skb;
+ unsigned int i;
+ unsigned int bufsz = 256 -
+ 16 /* for skb_reserve */ -
+ NET_IP_ALIGN;
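+ /* small skb for the head; frame data arrives in the page attached below */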
+
+ i = rx_ring->next_to_use;
+ buffer_info = &rx_ring->buffer_info[i];
+
+ while (cleaned_count--) {
+ skb = buffer_info->skb;
+ if (skb) {
+ skb_trim(skb, 0);
+ goto check_page;
+ }
+
+ skb = netdev_alloc_skb(netdev, bufsz);
+ if (unlikely(!skb)) {
+ /* Better luck next round */
+ adapter->alloc_rx_buff_failed++;
+ break;
+ }
+
+ /* Make buffer alignment 2 beyond a 16 byte boundary;
+ * this will result in a 16 byte aligned IP header after
+ * the 14 byte MAC header is removed
+ */
+ skb_reserve(skb, NET_IP_ALIGN);
+
+ buffer_info->skb = skb;
+check_page:
+ /* allocate a new page if necessary */
+ if (!buffer_info->page) {
+ buffer_info->page = alloc_page(GFP_ATOMIC);
+ if (unlikely(!buffer_info->page)) {
+ adapter->alloc_rx_buff_failed++;
+ break;
+ }
+ }
+
+ if (!buffer_info->dma)
+ buffer_info->dma = pci_map_page(pdev,
+ buffer_info->page, 0,
+ PAGE_SIZE,
+ PCI_DMA_FROMDEVICE);
+
+ rx_desc = E1000_RX_DESC(*rx_ring, i);
+ rx_desc->buffer_addr = cpu_to_le64(buffer_info->dma);
+
+ if (unlikely(++i == rx_ring->count))
+ i = 0;
+ buffer_info = &rx_ring->buffer_info[i];
+ }
+
+ if (likely(rx_ring->next_to_use != i)) {
+ rx_ring->next_to_use = i;
+ if (unlikely(i-- == 0))
+ i = (rx_ring->count - 1);
+
+ /* Force memory writes to complete before letting h/w
+ * know there are new descriptors to fetch. (Only
+ * applicable for weak-ordered memory model archs,
+ * such as IA-64). */
+ wmb();
+ writel(i, adapter->hw.hw_addr + rx_ring->tail);
+ }
+}
+
/**
* e1000_clean_rx_irq - Send received data up the network stack; legacy
* @adapter: board private structure
return cleaned;
}
+/**
+ * e1000_consume_page - helper function
+ **/
+static void e1000_consume_page(struct e1000_buffer *bi, struct sk_buff *skb,
+ u16 length)
+{
+ bi->page = NULL;
+ skb->len += length;
+ skb->data_len += length;
+ skb->truesize += length;
+}
+
+/**
+ * e1000_clean_jumbo_rx_irq - Send received data up the network stack; legacy
+ * @adapter: board private structure
+ *
+ * the return value indicates whether actual cleaning was done, there
+ * is no guarantee that everything was cleaned
+ **/
+
+static bool e1000_clean_jumbo_rx_irq(struct e1000_adapter *adapter,
+ int *work_done, int work_to_do)
+{
+ struct net_device *netdev = adapter->netdev;
+ struct pci_dev *pdev = adapter->pdev;
+ struct e1000_ring *rx_ring = adapter->rx_ring;
+ struct e1000_rx_desc *rx_desc, *next_rxd;
+ struct e1000_buffer *buffer_info, *next_buffer;
+ u32 length;
+ unsigned int i;
+ int cleaned_count = 0;
+ bool cleaned = false;
+ unsigned int total_rx_bytes = 0, total_rx_packets = 0;
+
+ i = rx_ring->next_to_clean;
+ rx_desc = E1000_RX_DESC(*rx_ring, i);
+ buffer_info = &rx_ring->buffer_info[i];
+
+ while (rx_desc->status & E1000_RXD_STAT_DD) {
+ struct sk_buff *skb;
+ u8 status;
+
+ if (*work_done >= work_to_do)
+ break;
+ (*work_done)++;
+
+ status = rx_desc->status;
+ skb = buffer_info->skb;
+ buffer_info->skb = NULL;
+
+ ++i;
+ if (i == rx_ring->count)
+ i = 0;
+ next_rxd = E1000_RX_DESC(*rx_ring, i);
+ prefetch(next_rxd);
+
+ next_buffer = &rx_ring->buffer_info[i];
+
+ cleaned = true;
+ cleaned_count++;
+ pci_unmap_page(pdev, buffer_info->dma, PAGE_SIZE,
+ PCI_DMA_FROMDEVICE);
+ buffer_info->dma = 0;
+
+ length = le16_to_cpu(rx_desc->length);
+
+ /* errors is only valid for DD + EOP descriptors */
+ if (unlikely((status & E1000_RXD_STAT_EOP) &&
+ (rx_desc->errors & E1000_RXD_ERR_FRAME_ERR_MASK))) {
+ /* recycle both page and skb */
+ buffer_info->skb = skb;
+ /* an error means any chain goes out the window
+ * too */
+ if (rx_ring->rx_skb_top)
+ dev_kfree_skb(rx_ring->rx_skb_top);
+ rx_ring->rx_skb_top = NULL;
+ goto next_desc;
+ }
+
+#define rxtop rx_ring->rx_skb_top
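+ /* rxtop accumulates page fragments of a frame spanning multiple descriptors */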
+ if (!(status & E1000_RXD_STAT_EOP)) {
+ /* this descriptor is only the beginning (or middle) */
+ if (!rxtop) {
+ /* this is the beginning of a chain */
+ rxtop = skb;
+ skb_fill_page_desc(rxtop, 0, buffer_info->page,
+ 0, length);
+ } else {
+ /* this is the middle of a chain */
+ skb_fill_page_desc(rxtop,
+ skb_shinfo(rxtop)->nr_frags,
+ buffer_info->page, 0, length);
+ /* re-use the skb, only consumed the page */
+ buffer_info->skb = skb;
+ }
+ e1000_consume_page(buffer_info, rxtop, length);
+ goto next_desc;
+ } else {
+ if (rxtop) {
+ /* end of the chain */
+ skb_fill_page_desc(rxtop,
+ skb_shinfo(rxtop)->nr_frags,
+ buffer_info->page, 0, length);
+ /* re-use the current skb, we only consumed the
+ * page */
+ buffer_info->skb = skb;
+ skb = rxtop;
+ rxtop = NULL;
+ e1000_consume_page(buffer_info, skb, length);
+ } else {
+ /* no chain, got EOP, this buf is the packet
+ * copybreak to save the put_page/alloc_page */
+ if (length <= copybreak &&
+ skb_tailroom(skb) >= length) {
+ u8 *vaddr;
+ vaddr = kmap_atomic(buffer_info->page,
+ KM_SKB_DATA_SOFTIRQ);
+ memcpy(skb_tail_pointer(skb), vaddr,
+ length);
+ kunmap_atomic(vaddr,
+ KM_SKB_DATA_SOFTIRQ);
+ /* re-use the page, so don't erase
+ * buffer_info->page */
+ skb_put(skb, length);
+ } else {
+ skb_fill_page_desc(skb, 0,
+ buffer_info->page, 0,
+ length);
+ e1000_consume_page(buffer_info, skb,
+ length);
+ }
+ }
+ }
+
+ /* Receive Checksum Offload XXX recompute due to CRC strip? */
+ e1000_rx_checksum(adapter,
+ (u32)(status) |
+ ((u32)(rx_desc->errors) << 24),
+ le16_to_cpu(rx_desc->csum), skb);
+
+ /* probably a little skewed due to removing CRC */
+ total_rx_bytes += skb->len;
+ total_rx_packets++;
+
+ /* eth type trans needs skb->data to point to something */
+ if (!pskb_may_pull(skb, ETH_HLEN)) {
+ ndev_err(netdev, "pskb_may_pull failed.\n");
+ dev_kfree_skb(skb);
+ goto next_desc;
+ }
+
+ e1000_receive_skb(adapter, netdev, skb, status,
+ rx_desc->special);
+
+next_desc:
+ rx_desc->status = 0;
+
+ /* return some buffers to hardware, one at a time is too slow */
+ if (unlikely(cleaned_count >= E1000_RX_BUFFER_WRITE)) {
+ adapter->alloc_rx_buf(adapter, cleaned_count);
+ cleaned_count = 0;
+ }
+
+ /* use prefetched values */
+ rx_desc = next_rxd;
+ buffer_info = next_buffer;
+ }
+ rx_ring->next_to_clean = i;
+
+ cleaned_count = e1000_desc_unused(rx_ring);
+ if (cleaned_count)
+ adapter->alloc_rx_buf(adapter, cleaned_count);
+
+ adapter->total_rx_bytes += total_rx_bytes;
+ adapter->total_rx_packets += total_rx_packets;
+ adapter->net_stats.rx_bytes += total_rx_bytes;
+ adapter->net_stats.rx_packets += total_rx_packets;
+ return cleaned;
+}
+
/**
* e1000_clean_rx_ring - Free Rx Buffers per Queue
* @adapter: board private structure
pci_unmap_single(pdev, buffer_info->dma,
adapter->rx_buffer_len,
PCI_DMA_FROMDEVICE);
+ else if (adapter->clean_rx == e1000_clean_jumbo_rx_irq)
+ pci_unmap_page(pdev, buffer_info->dma,
+ PAGE_SIZE,
+ PCI_DMA_FROMDEVICE);
else if (adapter->clean_rx == e1000_clean_rx_irq_ps)
pci_unmap_single(pdev, buffer_info->dma,
adapter->rx_ps_bsize0,
buffer_info->dma = 0;
}
+ if (buffer_info->page) {
+ put_page(buffer_info->page);
+ buffer_info->page = NULL;
+ }
+
if (buffer_info->skb) {
dev_kfree_skb(buffer_info->skb);
buffer_info->skb = NULL;
* a lot of memory, since we allocate 3 pages at all times
* per packet.
*/
- adapter->rx_ps_pages = 0;
pages = PAGE_USE_COUNT(adapter->netdev->mtu);
- if ((pages <= 3) && (PAGE_SIZE <= 16384) && (rctl & E1000_RCTL_LPE))
+ if (!(adapter->flags & FLAG_IS_ICH) && (pages <= 3) &&
+ (PAGE_SIZE <= 16384) && (rctl & E1000_RCTL_LPE))
adapter->rx_ps_pages = pages;
+ else
+ adapter->rx_ps_pages = 0;
if (adapter->rx_ps_pages) {
/* Configure extra packet-split registers */
sizeof(union e1000_rx_desc_packet_split);
adapter->clean_rx = e1000_clean_rx_irq_ps;
adapter->alloc_rx_buf = e1000_alloc_rx_buffers_ps;
+ } else if (adapter->netdev->mtu > ETH_FRAME_LEN + ETH_FCS_LEN) {
+ rdlen = rx_ring->count * sizeof(struct e1000_rx_desc);
+ adapter->clean_rx = e1000_clean_jumbo_rx_irq;
+ adapter->alloc_rx_buf = e1000_alloc_jumbo_rx_buffers;
} else {
- rdlen = rx_ring->count *
- sizeof(struct e1000_rx_desc);
+ rdlen = rx_ring->count * sizeof(struct e1000_rx_desc);
adapter->clean_rx = e1000_clean_rx_irq;
adapter->alloc_rx_buf = e1000_alloc_rx_buffers;
}
* units), e.g. using jumbo frames when setting to E1000_ERT_2048
*/
if ((adapter->flags & FLAG_HAS_ERT) &&
- (adapter->netdev->mtu > ETH_DATA_LEN))
- ew32(ERT, E1000_ERT_2048);
+ (adapter->netdev->mtu > ETH_DATA_LEN)) {
+ u32 rxdctl = er32(RXDCTL(0));
+ ew32(RXDCTL(0), rxdctl | 0x3);
+ ew32(ERT, E1000_ERT_2048 | (1 << 13));
+ /*
+ * With jumbo frames and early-receive enabled, excessive
+ * C4->C2 latencies result in dropped transactions.
+ */
+ pm_qos_update_requirement(PM_QOS_CPU_DMA_LATENCY,
+ e1000e_driver_name, 55);
+ } else {
+ pm_qos_update_requirement(PM_QOS_CPU_DMA_LATENCY,
+ e1000e_driver_name,
+ PM_QOS_DEFAULT_VALUE);
+ }
/* Enable Receives */
ew32(RCTL, rctl);
/* Allow time for pending master requests to run */
mac->ops.reset_hw(hw);
+
+ /*
+ * For parts with AMT enabled, let the firmware know
+ * that the network interface is in control
+ */
+ if ((adapter->flags & FLAG_HAS_AMT) && e1000e_check_mng_mode(hw))
+ e1000_get_hw_control(adapter);
+
ew32(WUC, 0);
if (mac->ops.init_hw(hw))
* means we reserve 2 more, this pushes us to allocate from the next
* larger slab size.
* i.e. RXBUFFER_2048 --> size-4096 slab
+ * However with the new *_jumbo_rx* routines, jumbo receives will use
+ * fragmented skbs
*/
if (max_frame <= 256)
ew32(CTRL_EXT, ctrl_ext);
}
+ if (adapter->flags & FLAG_IS_ICH)
+ e1000e_disable_gig_wol_ich8lan(&adapter->hw);
+
/* Allow time for pending master requests to run */
e1000e_disable_pcie_master(&adapter->hw);
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_ICH9_IFE_GT), board_ich9lan },
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_ICH9_IGP_AMT), board_ich9lan },
{ PCI_VDEVICE(INTEL, E1000_DEV_ID_ICH9_IGP_C), board_ich9lan },
+ { PCI_VDEVICE(INTEL, E1000_DEV_ID_ICH9_IGP_M), board_ich9lan },
+ { PCI_VDEVICE(INTEL, E1000_DEV_ID_ICH9_IGP_M_AMT), board_ich9lan },
+ { PCI_VDEVICE(INTEL, E1000_DEV_ID_ICH9_IGP_M_V), board_ich9lan },
+
+ { PCI_VDEVICE(INTEL, E1000_DEV_ID_ICH10_R_BM_LM), board_ich9lan },
+ { PCI_VDEVICE(INTEL, E1000_DEV_ID_ICH10_R_BM_LF), board_ich9lan },
+ { PCI_VDEVICE(INTEL, E1000_DEV_ID_ICH10_R_BM_V), board_ich9lan },
{ } /* terminate list */
};
printk(KERN_INFO "%s: Copyright (c) 1999-2008 Intel Corporation.\n",
e1000e_driver_name);
ret = pci_register_driver(&e1000_driver);
-
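+ /* start with an unconstrained CPU DMA latency requirement; it is
+ * tightened when jumbo frames use early receive */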
+ pm_qos_add_requirement(PM_QOS_CPU_DMA_LATENCY, e1000e_driver_name,
+ PM_QOS_DEFAULT_VALUE);
+
return ret;
}
module_init(e1000_init_module);
static void __exit e1000_exit_module(void)
{
pci_unregister_driver(&e1000_driver);
+ pm_qos_remove_requirement(PM_QOS_CPU_DMA_LATENCY, e1000e_driver_name);
}
module_exit(e1000_exit_module);
static s32 e1000_phy_force_speed_duplex(struct e1000_hw *hw);
static s32 e1000_set_d0_lplu_state(struct e1000_hw *hw, bool active);
static s32 e1000_wait_autoneg(struct e1000_hw *hw);
+static u32 e1000_get_phy_addr_for_bm_page(u32 page, u32 reg);
+static s32 e1000_access_phy_wakeup_reg_bm(struct e1000_hw *hw, u32 offset,
+ u16 *data, bool read);
/* Cable length tables */
static const u16 e1000_m88_cable_length_table[] =
if (phy->disable_polarity_correction == 1)
phy_data |= M88E1000_PSCR_POLARITY_REVERSAL;
+ /* Enable downshift on BM (disabled by default) */
+ if (phy->type == e1000_phy_bm)
+ phy_data |= BME1000_PSCR_ENABLE_DOWNSHIFT;
+
ret_val = e1e_wphy(hw, M88E1000_PHY_SPEC_CTRL, phy_data);
if (ret_val)
return ret_val;
case IFE_C_E_PHY_ID:
phy_type = e1000_phy_ife;
break;
+ case BME1000_E_PHY_ID:
+ case BME1000_E_PHY_ID_R2:
+ phy_type = e1000_phy_bm;
+ break;
default:
phy_type = e1000_phy_unknown;
break;
return phy_type;
}
+/**
+ * e1000e_determine_phy_address - Determines PHY address.
+ * @hw: pointer to the HW structure
+ *
+ * This uses a trial and error method to loop through possible PHY
+ * addresses. It tests each by reading the PHY ID registers and
+ * checking for a match.
+ **/
+s32 e1000e_determine_phy_address(struct e1000_hw *hw)
+{
+ s32 ret_val = -E1000_ERR_PHY_TYPE;
+ u32 phy_addr = 0;
+ u32 i = 0;
+ enum e1000_phy_type phy_type = e1000_phy_unknown;
+
+ do {
+ for (phy_addr = 0; phy_addr < 4; phy_addr++) {
+ hw->phy.addr = phy_addr;
+ e1000e_get_phy_id(hw);
+ phy_type = e1000e_get_phy_type_from_id(hw->phy.id);
+
+ /*
+ * If phy_type is valid, break - we found our
+ * PHY address
+ */
+ if (phy_type != e1000_phy_unknown) {
+ ret_val = 0;
+ break;
+ }
+ }
+ i++;
+ } while ((ret_val != 0) && (i < 100));
+
+ return ret_val;
+}
+
+/**
+ * e1000_get_phy_addr_for_bm_page - Retrieve PHY page address
+ * @page: page to access
+ *
+ * Returns the phy address for the page requested.
+ **/
+static u32 e1000_get_phy_addr_for_bm_page(u32 page, u32 reg)
+{
+ u32 phy_addr = 2;
+
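+ /* most registers are at PHY address 2; pages >= 768 and a few
+ * special registers live at address 1 */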
+ if ((page >= 768) || (page == 0 && reg == 25) || (reg == 31))
+ phy_addr = 1;
+
+ return phy_addr;
+}
+
+/**
+ * e1000e_write_phy_reg_bm - Write BM PHY register
+ * @hw: pointer to the HW structure
+ * @offset: register offset to write to
+ * @data: data to write at register offset
+ *
+ * Acquires the semaphore, if necessary, then writes the data to the PHY
+ * register at the offset. Releases any acquired semaphores before exiting.
+ **/
+s32 e1000e_write_phy_reg_bm(struct e1000_hw *hw, u32 offset, u16 data)
+{
+ s32 ret_val;
+ u32 page_select = 0;
+ u32 page = offset >> IGP_PAGE_SHIFT;
+ u32 page_shift = 0;
+
+ /* Page 800 works differently than the rest so it has its own func */
+ if (page == BM_WUC_PAGE) {
+ ret_val = e1000_access_phy_wakeup_reg_bm(hw, offset, &data,
+ false);
+ goto out;
+ }
+
+ ret_val = hw->phy.ops.acquire_phy(hw);
+ if (ret_val)
+ goto out;
+
+ hw->phy.addr = e1000_get_phy_addr_for_bm_page(page, offset);
+
+ if (offset > MAX_PHY_MULTI_PAGE_REG) {
+ /*
+ * Page select is register 31 for phy address 1 and 22 for
+ * phy address 2 and 3. Page select is shifted only for
+ * phy address 1.
+ */
+ if (hw->phy.addr == 1) {
+ page_shift = IGP_PAGE_SHIFT;
+ page_select = IGP01E1000_PHY_PAGE_SELECT;
+ } else {
+ page_shift = 0;
+ page_select = BM_PHY_PAGE_SELECT;
+ }
+
+ /* Page is shifted left, PHY expects (page x 32) */
+ ret_val = e1000e_write_phy_reg_mdic(hw, page_select,
+ (page << page_shift));
+ if (ret_val) {
+ hw->phy.ops.release_phy(hw);
+ goto out;
+ }
+ }
+
+ ret_val = e1000e_write_phy_reg_mdic(hw, MAX_PHY_REG_ADDRESS & offset,
+ data);
+
+ hw->phy.ops.release_phy(hw);
+
+out:
+ return ret_val;
+}
+
+/**
+ * e1000e_read_phy_reg_bm - Read BM PHY register
+ * @hw: pointer to the HW structure
+ * @offset: register offset to be read
+ * @data: pointer to the read data
+ *
+ * Acquires the semaphore, if necessary, then reads the PHY register at offset
+ * and stores the retrieved information in data. Releases any acquired
+ * semaphores before exiting.
+ **/
+s32 e1000e_read_phy_reg_bm(struct e1000_hw *hw, u32 offset, u16 *data)
+{
+ s32 ret_val;
+ u32 page_select = 0;
+ u32 page = offset >> IGP_PAGE_SHIFT;
+ u32 page_shift = 0;
+
+ /* Page 800 works differently than the rest so it has its own func */
+ if (page == BM_WUC_PAGE) {
+ ret_val = e1000_access_phy_wakeup_reg_bm(hw, offset, data,
+ true);
+ goto out;
+ }
+
+ ret_val = hw->phy.ops.acquire_phy(hw);
+ if (ret_val)
+ goto out;
+
+ hw->phy.addr = e1000_get_phy_addr_for_bm_page(page, offset);
+
+ if (offset > MAX_PHY_MULTI_PAGE_REG) {
+ /*
+ * Page select is register 31 for phy address 1 and 22 for
+ * phy address 2 and 3. Page select is shifted only for
+ * phy address 1.
+ */
+ if (hw->phy.addr == 1) {
+ page_shift = IGP_PAGE_SHIFT;
+ page_select = IGP01E1000_PHY_PAGE_SELECT;
+ } else {
+ page_shift = 0;
+ page_select = BM_PHY_PAGE_SELECT;
+ }
+
+ /* Page is shifted left, PHY expects (page x 32) */
+ ret_val = e1000e_write_phy_reg_mdic(hw, page_select,
+ (page << page_shift));
+ if (ret_val) {
+ hw->phy.ops.release_phy(hw);
+ goto out;
+ }
+ }
+
+ ret_val = e1000e_read_phy_reg_mdic(hw, MAX_PHY_REG_ADDRESS & offset,
+ data);
+ hw->phy.ops.release_phy(hw);
+
+out:
+ return ret_val;
+}
+
+/**
+ * e1000_access_phy_wakeup_reg_bm - Read BM PHY wakeup register
+ * @hw: pointer to the HW structure
+ * @offset: register offset to be read or written
+ * @data: pointer to the data to read or write
+ * @read: determines if operation is read or write
+ *
+ * Acquires the semaphore, if necessary, then reads or writes the PHY register
+ * at offset, using *data as the destination or source. Releases any acquired
+ * semaphores before exiting. Note that the procedure to access the wakeup
+ * registers is different. It works as such:
+ * 1) Set page 769, register 17, bit 2 = 1
+ * 2) Set page to 800 for host (801 if we were manageability)
+ * 3) Write the address using the address opcode (0x11)
+ * 4) Read or write the data using the data opcode (0x12)
+ * 5) Restore 769_17.2 to its original value
+ **/
+static s32 e1000_access_phy_wakeup_reg_bm(struct e1000_hw *hw, u32 offset,
+ u16 *data, bool read)
+{
+ s32 ret_val;
+ u16 reg = ((u16)offset) & PHY_REG_MASK;
+ u16 phy_reg = 0;
+ u8 phy_acquired = 1;
+
+
+ ret_val = hw->phy.ops.acquire_phy(hw);
+ if (ret_val) {
+ phy_acquired = 0;
+ goto out;
+ }
+
+ /* All operations in this function are phy address 1 */
+ hw->phy.addr = 1;
+
+ /* Set page 769 */
+ e1000e_write_phy_reg_mdic(hw, IGP01E1000_PHY_PAGE_SELECT,
+ (BM_WUC_ENABLE_PAGE << IGP_PAGE_SHIFT));
+
+ ret_val = e1000e_read_phy_reg_mdic(hw, BM_WUC_ENABLE_REG, &phy_reg);
+ if (ret_val)
+ goto out;
+
+ /* First clear bit 4 to avoid a power state change */
+ phy_reg &= ~(BM_WUC_HOST_WU_BIT);
+ ret_val = e1000e_write_phy_reg_mdic(hw, BM_WUC_ENABLE_REG, phy_reg);
+ if (ret_val)
+ goto out;
+
+ /* Write bit 2 = 1, and clear bit 4 to 769_17 */
+ ret_val = e1000e_write_phy_reg_mdic(hw, BM_WUC_ENABLE_REG,
+ phy_reg | BM_WUC_ENABLE_BIT);
+ if (ret_val)
+ goto out;
+
+ /* Select page 800 */
+ ret_val = e1000e_write_phy_reg_mdic(hw, IGP01E1000_PHY_PAGE_SELECT,
+ (BM_WUC_PAGE << IGP_PAGE_SHIFT));
+
+ /* Write the page 800 offset value using opcode 0x11 */
+ ret_val = e1000e_write_phy_reg_mdic(hw, BM_WUC_ADDRESS_OPCODE, reg);
+ if (ret_val)
+ goto out;
+
+ if (read) {
+ /* Read the page 800 value using opcode 0x12 */
+ ret_val = e1000e_read_phy_reg_mdic(hw, BM_WUC_DATA_OPCODE,
+ data);
+ } else {
+ /* Write the page 800 value using opcode 0x12 */
+ ret_val = e1000e_write_phy_reg_mdic(hw, BM_WUC_DATA_OPCODE,
+ *data);
+ }
+
+ if (ret_val)
+ goto out;
+
+ /*
+ * Restore 769_17.2 to its original value
+ * Set page 769
+ */
+ e1000e_write_phy_reg_mdic(hw, IGP01E1000_PHY_PAGE_SELECT,
+ (BM_WUC_ENABLE_PAGE << IGP_PAGE_SHIFT));
+
+ /* Clear 769_17.2 */
+ ret_val = e1000e_write_phy_reg_mdic(hw, BM_WUC_ENABLE_REG, phy_reg);
+
+out:
+ if (phy_acquired == 1)
+ hw->phy.ops.release_phy(hw);
+ return ret_val;
+}
+
/**
* e1000e_commit_phy - Soft PHY reset
* @hw: pointer to the HW structure
0x0000,Cmd_MCast,
0x0076, /* link to next command */
#define CONF_NR_MULTICAST 0x44
- 0x0000, /* number of multicast addresses */
+ 0x0000, /* number of bytes in multicast address(es) */
#define CONF_MULTICAST 0x46
0x0000, 0x0000, 0x0000, /* some addresses */
0x0000, 0x0000, 0x0000,
static void eexp_setup_filter(struct net_device *dev)
{
- struct dev_mc_list *dmi = dev->mc_list;
+ struct dev_mc_list *dmi;
unsigned short ioaddr = dev->base_addr;
int count = dev->mc_count;
int i;
}
outw(CONF_NR_MULTICAST & ~31, ioaddr+SM_PTR);
- outw(count, ioaddr+SHADOW(CONF_NR_MULTICAST));
- for (i = 0; i < count; i++) {
- unsigned short *data = (unsigned short *)dmi->dmi_addr;
+ outw(6*count, ioaddr+SHADOW(CONF_NR_MULTICAST));
+ for (i = 0, dmi = dev->mc_list; i < count; i++, dmi = dmi->next) {
+ unsigned short *data;
if (!dmi) {
printk(KERN_INFO "%s: too few multicast addresses\n", dev->name);
break;
printk(KERN_INFO "%s: invalid multicast address length given.\n", dev->name);
continue;
}
+ data = (unsigned short *)dmi->dmi_addr;
outw((CONF_MULTICAST+(6*i)) & ~31, ioaddr+SM_PTR);
outw(data[0], ioaddr+SHADOW(CONF_MULTICAST+(6*i)));
outw((CONF_MULTICAST+(6*i)+2) & ~31, ioaddr+SM_PTR);
ret = of_address_to_resource(ofdev->node, 0, &res);
if (ret)
- return ret;
+ goto out_res;
snprintf(new_bus->id, MII_BUS_ID_SIZE, "%x", res.start);
kfree(new_bus->irq);
out_unmap_regs:
iounmap(fec->fecp);
+out_res:
out_fec:
kfree(fec);
out_mii:
static void gfar_netpoll(struct net_device *dev);
#endif
int gfar_clean_rx_ring(struct net_device *dev, int rx_work_limit);
+static int gfar_clean_tx_ring(struct net_device *dev);
static int gfar_process_frame(struct net_device *dev, struct sk_buff *skb, int length);
static void gfar_vlan_rx_register(struct net_device *netdev,
struct vlan_group *grp);
}
/* Changes the mac address if the controller is not running. */
-int gfar_set_mac_address(struct net_device *dev)
+static int gfar_set_mac_address(struct net_device *dev)
{
gfar_set_mac_for_addr(dev, 0, dev->dev_addr);
}
/* Interrupt Handler for Transmit complete */
-int gfar_clean_tx_ring(struct net_device *dev)
+static int gfar_clean_tx_ring(struct net_device *dev)
{
struct txbd8 *bdp;
struct gfar_private *priv = netdev_priv(dev);
extern void gfar_phy_test(struct mii_bus *bus, struct phy_device *phydev,
int enable, u32 regnum, u32 read);
void gfar_init_sysfs(struct net_device *dev);
+int gfar_local_mdio_write(struct gfar_mii __iomem *regs, int mii_id,
+ int regnum, u16 value);
+int gfar_local_mdio_read(struct gfar_mii __iomem *regs, int mii_id, int regnum);
#endif /* __GIANFAR_H */
spin_lock_irqsave(&priv->rxlock, flags);
if (length > priv->rx_buffer_size)
- return count;
+ goto out;
if (length == priv->rx_stash_size)
- return count;
+ goto out;
priv->rx_stash_size = length;
gfar_write(&priv->regs->attr, temp);
+out:
spin_unlock_irqrestore(&priv->rxlock, flags);
return count;
spin_lock_irqsave(&priv->rxlock, flags);
if (index > priv->rx_stash_size)
- return count;
+ goto out;
if (index == priv->rx_stash_index)
- return count;
+ goto out;
priv->rx_stash_index = index;
temp |= ATTRELI_EI(index);
gfar_write(&priv->regs->attreli, flags);
+out:
spin_unlock_irqrestore(&priv->rxlock, flags);
return count;
unregister_netdevice(dev);
if (list_empty(&port->vlans))
- macvlan_port_destroy(dev);
+ macvlan_port_destroy(port->dev);
}
static struct rtnl_link_ops macvlan_link_ops __read_mostly = {
*/
#define PHY_ADDR_REG 0x0000
#define SMI_REG 0x0004
+#define WINDOW_BASE(i) (0x0200 + ((i) << 3))
+#define WINDOW_SIZE(i) (0x0204 + ((i) << 3))
+#define WINDOW_REMAP_HIGH(i) (0x0280 + ((i) << 2))
+#define WINDOW_BAR_ENABLE 0x0290
+#define WINDOW_PROTECT(i) (0x0294 + ((i) << 4))
/*
* Per-port registers.
u32 late_collision;
};
+struct mv643xx_shared_private {
+ void __iomem *eth_base;
+
+ /* used to protect SMI_REG, which is shared across ports */
+ spinlock_t phy_lock;
+
+ u32 win_protect;
+
+ unsigned int t_clk;
+};
+
struct mv643xx_private {
+ struct mv643xx_shared_private *shared;
int port_num; /* User Ethernet port number */
+ struct mv643xx_shared_private *shared_smi;
+
u32 rx_sram_addr; /* Base address of rx sram area */
u32 rx_sram_size; /* Size of rx sram area */
u32 tx_sram_addr; /* Base address of tx sram area */
static char mv643xx_driver_name[] = "mv643xx_eth";
static char mv643xx_driver_version[] = "1.0";
-static void __iomem *mv643xx_eth_base;
-
-/* used to protect SMI_REG, which is shared across ports */
-static DEFINE_SPINLOCK(mv643xx_eth_phy_lock);
-
static inline u32 rdl(struct mv643xx_private *mp, int offset)
{
- return readl(mv643xx_eth_base + offset);
+ return readl(mp->shared->eth_base + offset);
}
static inline void wrl(struct mv643xx_private *mp, int offset, u32 data)
{
- writel(data, mv643xx_eth_base + offset);
+ writel(data, mp->shared->eth_base + offset);
}
/*
*
* INPUT:
* struct mv643xx_private *mp Ethernet port
- * unsigned int t_clk t_clk of the MV-643xx chip in HZ units
* unsigned int delay Delay in usec
*
* OUTPUT:
*
*/
static unsigned int eth_port_set_rx_coal(struct mv643xx_private *mp,
- unsigned int t_clk, unsigned int delay)
+ unsigned int delay)
{
unsigned int port_num = mp->port_num;
- unsigned int coal = ((t_clk / 1000000) * delay) / 64;
+ unsigned int coal = ((mp->shared->t_clk / 1000000) * delay) / 64;
/* Set RX Coalescing mechanism */
wrl(mp, SDMA_CONFIG_REG(port_num),
*
* INPUT:
* struct mv643xx_private *mp Ethernet port
- * unsigned int t_clk t_clk of the MV-643xx chip in HZ units
* unsigned int delay Delay in uSeconds
*
* OUTPUT:
*
*/
static unsigned int eth_port_set_tx_coal(struct mv643xx_private *mp,
- unsigned int t_clk, unsigned int delay)
+ unsigned int delay)
{
- unsigned int coal = ((t_clk / 1000000) * delay) / 64;
+ unsigned int coal = ((mp->shared->t_clk / 1000000) * delay) / 64;
/* Set TX Coalescing mechanism */
wrl(mp, TX_FIFO_URGENT_THRESHOLD_REG(mp->port_num), coal << 4);
#ifdef MV643XX_COAL
mp->rx_int_coal =
- eth_port_set_rx_coal(mp, 133000000, MV643XX_RX_COAL);
+ eth_port_set_rx_coal(mp, MV643XX_RX_COAL);
#endif
mp->tx_int_coal =
- eth_port_set_tx_coal(mp, 133000000, MV643XX_TX_COAL);
+ eth_port_set_tx_coal(mp, MV643XX_TX_COAL);
/* Unmask phy and link status changes interrupts */
wrl(mp, INTERRUPT_EXTEND_MASK_REG(port_num), ETH_INT_UNMASK_ALL_EXT);
return -ENODEV;
}
+ if (pd->shared == NULL) {
+ printk(KERN_ERR "No mv643xx_eth_platform_data->shared\n");
+ return -ENODEV;
+ }
+
dev = alloc_etherdev(sizeof(struct mv643xx_private));
if (!dev)
return -ENOMEM;
spin_lock_init(&mp->lock);
+ mp->shared = platform_get_drvdata(pd->shared);
port_num = mp->port_num = pd->port_number;
+ if (mp->shared->win_protect)
+ wrl(mp, WINDOW_PROTECT(port_num), mp->shared->win_protect);
+
+ mp->shared_smi = mp->shared;
+ if (pd->shared_smi != NULL)
+ mp->shared_smi = platform_get_drvdata(pd->shared_smi);
+
/* set default config values */
eth_port_uc_addr_get(mp, dev->dev_addr);
mp->rx_ring_size = PORT_DEFAULT_RECEIVE_QUEUE_SIZE;
return 0;
}
+static void mv643xx_eth_conf_mbus_windows(struct mv643xx_shared_private *msp,
+ struct mbus_dram_target_info *dram)
+{
+ void __iomem *base = msp->eth_base;
+ u32 win_enable;
+ u32 win_protect;
+ int i;
+
+ for (i = 0; i < 6; i++) {
+ writel(0, base + WINDOW_BASE(i));
+ writel(0, base + WINDOW_SIZE(i));
+ if (i < 4)
+ writel(0, base + WINDOW_REMAP_HIGH(i));
+ }
+
+ win_enable = 0x3f;
+ win_protect = 0;
+
+ for (i = 0; i < dram->num_cs; i++) {
+ struct mbus_dram_window *cs = dram->cs + i;
+
+ writel((cs->base & 0xffff0000) |
+ (cs->mbus_attr << 8) |
+ dram->mbus_dram_target_id, base + WINDOW_BASE(i));
+ writel((cs->size - 1) & 0xffff0000, base + WINDOW_SIZE(i));
+
+ win_enable &= ~(1 << i);
+ win_protect |= 3 << (2 * i);
+ }
+
+ writel(win_enable, base + WINDOW_BAR_ENABLE);
+ msp->win_protect = win_protect;
+}
+
static int mv643xx_eth_shared_probe(struct platform_device *pdev)
{
static int mv643xx_version_printed = 0;
+ struct mv643xx_eth_shared_platform_data *pd = pdev->dev.platform_data;
+ struct mv643xx_shared_private *msp;
struct resource *res;
+ int ret;
if (!mv643xx_version_printed++)
printk(KERN_NOTICE "MV-643xx 10/100/1000 Ethernet Driver\n");
+ ret = -EINVAL;
res = platform_get_resource(pdev, IORESOURCE_MEM, 0);
if (res == NULL)
- return -ENODEV;
+ goto out;
- mv643xx_eth_base = ioremap(res->start, res->end - res->start + 1);
- if (mv643xx_eth_base == NULL)
- return -ENOMEM;
+ ret = -ENOMEM;
+ msp = kmalloc(sizeof(*msp), GFP_KERNEL);
+ if (msp == NULL)
+ goto out;
+ memset(msp, 0, sizeof(*msp));
+
+ msp->eth_base = ioremap(res->start, res->end - res->start + 1);
+ if (msp->eth_base == NULL)
+ goto out_free;
+
+ spin_lock_init(&msp->phy_lock);
+ msp->t_clk = (pd != NULL && pd->t_clk != 0) ? pd->t_clk : 133000000;
+
+ platform_set_drvdata(pdev, msp);
+
+ /*
+ * (Re-)program MBUS remapping windows if we are asked to.
+ */
+ if (pd != NULL && pd->dram != NULL)
+ mv643xx_eth_conf_mbus_windows(msp, pd->dram);
return 0;
+out_free:
+ kfree(msp);
+out:
+ return ret;
}
static int mv643xx_eth_shared_remove(struct platform_device *pdev)
{
- iounmap(mv643xx_eth_base);
- mv643xx_eth_base = NULL;
+ struct mv643xx_shared_private *msp = platform_get_drvdata(pdev);
+
+ iounmap(msp->eth_base);
+ kfree(msp);
return 0;
}
static void eth_port_read_smi_reg(struct mv643xx_private *mp,
unsigned int phy_reg, unsigned int *value)
{
+ void __iomem *smi_reg = mp->shared_smi->eth_base + SMI_REG;
int phy_addr = ethernet_phy_get(mp);
unsigned long flags;
int i;
/* the SMI register is a shared resource */
- spin_lock_irqsave(&mv643xx_eth_phy_lock, flags);
+ spin_lock_irqsave(&mp->shared_smi->phy_lock, flags);
/* wait for the SMI register to become available */
- for (i = 0; rdl(mp, SMI_REG) & ETH_SMI_BUSY; i++) {
+ for (i = 0; readl(smi_reg) & ETH_SMI_BUSY; i++) {
if (i == PHY_WAIT_ITERATIONS) {
printk("%s: PHY busy timeout\n", mp->dev->name);
goto out;
udelay(PHY_WAIT_MICRO_SECONDS);
}
- wrl(mp, SMI_REG,
- (phy_addr << 16) | (phy_reg << 21) | ETH_SMI_OPCODE_READ);
+ writel((phy_addr << 16) | (phy_reg << 21) | ETH_SMI_OPCODE_READ,
+ smi_reg);
/* now wait for the data to be valid */
- for (i = 0; !(rdl(mp, SMI_REG) & ETH_SMI_READ_VALID); i++) {
+ for (i = 0; !(readl(smi_reg) & ETH_SMI_READ_VALID); i++) {
if (i == PHY_WAIT_ITERATIONS) {
printk("%s: PHY read timeout\n", mp->dev->name);
goto out;
udelay(PHY_WAIT_MICRO_SECONDS);
}
- *value = rdl(mp, SMI_REG) & 0xffff;
+ *value = readl(smi_reg) & 0xffff;
out:
- spin_unlock_irqrestore(&mv643xx_eth_phy_lock, flags);
+ spin_unlock_irqrestore(&mp->shared_smi->phy_lock, flags);
}
/*
static void eth_port_write_smi_reg(struct mv643xx_private *mp,
unsigned int phy_reg, unsigned int value)
{
- int phy_addr;
- int i;
+ void __iomem *smi_reg = mp->shared_smi->eth_base + SMI_REG;
+ int phy_addr = ethernet_phy_get(mp);
unsigned long flags;
-
- phy_addr = ethernet_phy_get(mp);
+ int i;
/* the SMI register is a shared resource */
- spin_lock_irqsave(&mv643xx_eth_phy_lock, flags);
+ spin_lock_irqsave(&mp->shared_smi->phy_lock, flags);
/* wait for the SMI register to become available */
- for (i = 0; rdl(mp, SMI_REG) & ETH_SMI_BUSY; i++) {
+ for (i = 0; readl(smi_reg) & ETH_SMI_BUSY; i++) {
if (i == PHY_WAIT_ITERATIONS) {
printk("%s: PHY busy timeout\n", mp->dev->name);
goto out;
udelay(PHY_WAIT_MICRO_SECONDS);
}
- wrl(mp, SMI_REG, (phy_addr << 16) | (phy_reg << 21) |
- ETH_SMI_OPCODE_WRITE | (value & 0xffff));
+ writel((phy_addr << 16) | (phy_reg << 21) |
+ ETH_SMI_OPCODE_WRITE | (value & 0xffff), smi_reg);
out:
- spin_unlock_irqrestore(&mv643xx_eth_phy_lock, flags);
+ spin_unlock_irqrestore(&mp->shared_smi->phy_lock, flags);
}
/*
*************************************************************************/
#define DRV_NAME "pcnet32"
-#ifdef CONFIG_PCNET32_NAPI
-#define DRV_VERSION "1.34-NAPI"
-#else
-#define DRV_VERSION "1.34"
-#endif
-#define DRV_RELDATE "14.Aug.2007"
+#define DRV_VERSION "1.35"
+#define DRV_RELDATE "21.Apr.2008"
#define PFX DRV_NAME ": "
static const char *const version =
static void pcnet32_netif_stop(struct net_device *dev)
{
-#ifdef CONFIG_PCNET32_NAPI
struct pcnet32_private *lp = netdev_priv(dev);
-#endif
+
dev->trans_start = jiffies;
-#ifdef CONFIG_PCNET32_NAPI
napi_disable(&lp->napi);
-#endif
netif_tx_disable(dev);
}
static void pcnet32_netif_start(struct net_device *dev)
{
-#ifdef CONFIG_PCNET32_NAPI
struct pcnet32_private *lp = netdev_priv(dev);
ulong ioaddr = dev->base_addr;
u16 val;
-#endif
+
netif_wake_queue(dev);
-#ifdef CONFIG_PCNET32_NAPI
val = lp->a.read_csr(ioaddr, CSR3);
val &= 0x00ff;
lp->a.write_csr(ioaddr, CSR3, val);
napi_enable(&lp->napi);
-#endif
}
/*
rc = 1; /* default to fail */
if (netif_running(dev))
-#ifdef CONFIG_PCNET32_NAPI
pcnet32_netif_stop(dev);
-#else
- pcnet32_close(dev);
-#endif
spin_lock_irqsave(&lp->lock, flags);
lp->a.write_csr(ioaddr, CSR0, CSR0_STOP); /* stop the chip */
x = a->read_bcr(ioaddr, 32); /* reset internal loopback */
a->write_bcr(ioaddr, 32, (x & ~0x0002));
-#ifdef CONFIG_PCNET32_NAPI
if (netif_running(dev)) {
pcnet32_netif_start(dev);
pcnet32_restart(dev, CSR0_NORMAL);
lp->a.write_bcr(ioaddr, 20, 4); /* return to 16bit mode */
}
spin_unlock_irqrestore(&lp->lock, flags);
-#else
- if (netif_running(dev)) {
- spin_unlock_irqrestore(&lp->lock, flags);
- pcnet32_open(dev);
- } else {
- pcnet32_purge_rx_ring(dev);
- lp->a.write_bcr(ioaddr, 20, 4); /* return to 16bit mode */
- spin_unlock_irqrestore(&lp->lock, flags);
- }
-#endif
return (rc);
} /* end pcnet32_loopback_test */
}
dev->stats.rx_bytes += skb->len;
skb->protocol = eth_type_trans(skb, dev);
-#ifdef CONFIG_PCNET32_NAPI
netif_receive_skb(skb);
-#else
- netif_rx(skb);
-#endif
dev->last_rx = jiffies;
dev->stats.rx_packets++;
return;
return must_restart;
}
-#ifdef CONFIG_PCNET32_NAPI
static int pcnet32_poll(struct napi_struct *napi, int budget)
{
struct pcnet32_private *lp = container_of(napi, struct pcnet32_private, napi);
}
return work_done;
}
-#endif
#define PCNET32_REGS_PER_PHY 32
#define PCNET32_MAX_PHYS 32
/* napi.weight is used in both the napi and non-napi cases */
lp->napi.weight = lp->rx_ring_size / 2;
-#ifdef CONFIG_PCNET32_NAPI
netif_napi_add(dev, &lp->napi, pcnet32_poll, lp->rx_ring_size / 2);
-#endif
if (fdx && !(lp->options & PCNET32_PORT_ASEL) &&
((cards_found >= MAX_UNITS) || full_duplex[cards_found]))
goto err_free_ring;
}
-#ifdef CONFIG_PCNET32_NAPI
napi_enable(&lp->napi);
-#endif
/* Re-initialize the PCNET32, and start it when done. */
lp->a.write_csr(ioaddr, 1, (lp->init_dma_addr & 0xffff));
dev->name, csr0);
/* unlike for the lance, there is no restart needed */
}
-#ifdef CONFIG_PCNET32_NAPI
if (netif_rx_schedule_prep(dev, &lp->napi)) {
u16 val;
/* set interrupt masks */
__netif_rx_schedule(dev, &lp->napi);
break;
}
-#else
- pcnet32_rx(dev, lp->napi.weight);
- if (pcnet32_tx(dev)) {
- /* reset the chip to clear the error condition, then restart */
- lp->a.reset(ioaddr);
- lp->a.write_csr(ioaddr, CSR4, 0x0915); /* auto tx pad */
- pcnet32_restart(dev, CSR0_START);
- netif_wake_queue(dev);
- }
-#endif
csr0 = lp->a.read_csr(ioaddr, CSR0);
}
-#ifndef CONFIG_PCNET32_NAPI
- /* Set interrupt enable. */
- lp->a.write_csr(ioaddr, CSR0, CSR0_INTEN);
-#endif
-
if (netif_msg_intr(lp))
printk(KERN_DEBUG "%s: exiting interrupt, csr0=%#4.4x.\n",
dev->name, lp->a.read_csr(ioaddr, CSR0));
del_timer_sync(&lp->watchdog_timer);
netif_stop_queue(dev);
-#ifdef CONFIG_PCNET32_NAPI
napi_disable(&lp->napi);
-#endif
spin_lock_irqsave(&lp->lock, flags);
* Must not be called from interrupt context, or while the
* phydev->lock is held.
*/
-void phy_error(struct phy_device *phydev)
+static void phy_error(struct phy_device *phydev)
{
mutex_lock(&phydev->lock);
phydev->state = PHY_HALTED;
ULI526X_DBUG(0, "uli526x_open", 0);
- ret = request_irq(dev->irq, &uli526x_interrupt, IRQF_SHARED, dev->name, dev);
- if (ret)
- return ret;
-
/* system variable init */
db->cr6_data = CR6_DEFAULT | uli526x_cr6_user_set;
db->tx_packet_cnt = 0;
/* Initialize ULI526X board */
uli526x_init(dev);
+ ret = request_irq(dev->irq, &uli526x_interrupt, IRQF_SHARED, dev->name, dev);
+ if (ret)
+ return ret;
+
/* Active System Interface */
netif_wake_queue(dev);
* This setup frame initialize ULI526X address filter mode
*/
+#ifdef __BIG_ENDIAN
+#define FLT_SHIFT 16
+#else
+#define FLT_SHIFT 0
+#endif
+
static void send_filter_frame(struct net_device *dev, int mc_cnt)
{
struct uli526x_board_info *db = netdev_priv(dev);
/* Node address */
addrptr = (u16 *) dev->dev_addr;
- *suptr++ = addrptr[0];
- *suptr++ = addrptr[1];
- *suptr++ = addrptr[2];
+ *suptr++ = addrptr[0] << FLT_SHIFT;
+ *suptr++ = addrptr[1] << FLT_SHIFT;
+ *suptr++ = addrptr[2] << FLT_SHIFT;
/* broadcast address */
- *suptr++ = 0xffff;
- *suptr++ = 0xffff;
- *suptr++ = 0xffff;
+ *suptr++ = 0xffff << FLT_SHIFT;
+ *suptr++ = 0xffff << FLT_SHIFT;
+ *suptr++ = 0xffff << FLT_SHIFT;
/* fit the multicast address */
for (mcptr = dev->mc_list, i = 0; i < mc_cnt; i++, mcptr = mcptr->next) {
addrptr = (u16 *) mcptr->dmi_addr;
- *suptr++ = addrptr[0];
- *suptr++ = addrptr[1];
- *suptr++ = addrptr[2];
+ *suptr++ = addrptr[0] << FLT_SHIFT;
+ *suptr++ = addrptr[1] << FLT_SHIFT;
+ *suptr++ = addrptr[2] << FLT_SHIFT;
}
for (; i<14; i++) {
- *suptr++ = 0xffff;
- *suptr++ = 0xffff;
- *suptr++ = 0xffff;
+ *suptr++ = 0xffff << FLT_SHIFT;
+ *suptr++ = 0xffff << FLT_SHIFT;
+ *suptr++ = 0xffff << FLT_SHIFT;
}
/* prepare the setup frame */
#endif /* UGETH_VERBOSE_DEBUG */
#define UGETH_MSG_DEFAULT (NETIF_MSG_IFUP << 1 ) - 1
-void uec_set_ethtool_ops(struct net_device *netdev);
static DEFINE_SPINLOCK(ugeth_lock);
}
}
-static struct sk_buff *get_new_skb(struct ucc_geth_private *ugeth, u8 *bd)
+static struct sk_buff *get_new_skb(struct ucc_geth_private *ugeth,
+ u8 __iomem *bd)
{
struct sk_buff *skb = NULL;
skb->dev = ugeth->dev;
- out_be32(&((struct qe_bd *)bd)->buf,
+ out_be32(&((struct qe_bd __iomem *)bd)->buf,
dma_map_single(NULL,
skb->data,
ugeth->ug_info->uf_info.max_rx_buf_length +
UCC_GETH_RX_DATA_BUF_ALIGNMENT,
DMA_FROM_DEVICE));
- out_be32((u32 *)bd, (R_E | R_I | (in_be32((u32 *)bd) & R_W)));
+ out_be32((u32 __iomem *)bd,
+ (R_E | R_I | (in_be32((u32 __iomem*)bd) & R_W)));
return skb;
}
static int rx_bd_buffer_set(struct ucc_geth_private *ugeth, u8 rxQ)
{
- u8 *bd;
+ u8 __iomem *bd;
u32 bd_status;
struct sk_buff *skb;
int i;
i = 0;
do {
- bd_status = in_be32((u32*)bd);
+ bd_status = in_be32((u32 __iomem *)bd);
skb = get_new_skb(ugeth, bd);
if (!skb) /* If can not allocate data buffer,
}
static int fill_init_enet_entries(struct ucc_geth_private *ugeth,
- volatile u32 *p_start,
+ u32 *p_start,
u8 num_entries,
u32 thread_size,
u32 thread_alignment,
}
static int return_init_enet_entries(struct ucc_geth_private *ugeth,
- volatile u32 *p_start,
+ u32 *p_start,
u8 num_entries,
enum qe_risc_allocation risc,
int skip_page_for_first_entry)
int snum;
for (i = 0; i < num_entries; i++) {
+ u32 val = *p_start;
+
/* Check that this entry was actually valid --
needed in case failed in allocations */
- if ((*p_start & ENET_INIT_PARAM_RISC_MASK) == risc) {
+ if ((val & ENET_INIT_PARAM_RISC_MASK) == risc) {
snum =
- (u32) (*p_start & ENET_INIT_PARAM_SNUM_MASK) >>
+ (u32) (val & ENET_INIT_PARAM_SNUM_MASK) >>
ENET_INIT_PARAM_SNUM_SHIFT;
qe_put_snum((u8) snum);
if (!((i == 0) && skip_page_for_first_entry)) {
/* First entry of Rx does not have page */
init_enet_offset =
- (in_be32(p_start) &
- ENET_INIT_PARAM_PTR_MASK);
+ (val & ENET_INIT_PARAM_PTR_MASK);
qe_muram_free(init_enet_offset);
}
- *(p_start++) = 0; /* Just for cosmetics */
+ *p_start++ = 0;
}
}
#ifdef DEBUG
static int dump_init_enet_entries(struct ucc_geth_private *ugeth,
- volatile u32 *p_start,
+ u32 __iomem *p_start,
u8 num_entries,
u32 thread_size,
enum qe_risc_allocation risc,
int snum;
for (i = 0; i < num_entries; i++) {
+ u32 val = in_be32(p_start);
+
/* Check that this entry was actually valid --
needed in case failed in allocations */
- if ((*p_start & ENET_INIT_PARAM_RISC_MASK) == risc) {
+ if ((val & ENET_INIT_PARAM_RISC_MASK) == risc) {
snum =
- (u32) (*p_start & ENET_INIT_PARAM_SNUM_MASK) >>
+ (u32) (val & ENET_INIT_PARAM_SNUM_MASK) >>
ENET_INIT_PARAM_SNUM_SHIFT;
qe_put_snum((u8) snum);
if (!((i == 0) && skip_page_for_first_entry)) {
static int hw_clear_addr_in_paddr(struct ucc_geth_private *ugeth, u8 paddr_num)
{
- struct ucc_geth_82xx_address_filtering_pram *p_82xx_addr_filt;
+ struct ucc_geth_82xx_address_filtering_pram __iomem *p_82xx_addr_filt;
if (!(paddr_num < NUM_OF_PADDRS)) {
		ugeth_warn("%s: Illegal paddr_num.", __FUNCTION__);
}
p_82xx_addr_filt =
- (struct ucc_geth_82xx_address_filtering_pram *) ugeth->p_rx_glbl_pram->
+ (struct ucc_geth_82xx_address_filtering_pram __iomem *) ugeth->p_rx_glbl_pram->
addressfiltering;
/* Writing address ff.ff.ff.ff.ff.ff disables address
static void hw_add_addr_in_hash(struct ucc_geth_private *ugeth,
u8 *p_enet_addr)
{
- struct ucc_geth_82xx_address_filtering_pram *p_82xx_addr_filt;
+ struct ucc_geth_82xx_address_filtering_pram __iomem *p_82xx_addr_filt;
u32 cecr_subblock;
p_82xx_addr_filt =
- (struct ucc_geth_82xx_address_filtering_pram *) ugeth->p_rx_glbl_pram->
+ (struct ucc_geth_82xx_address_filtering_pram __iomem *) ugeth->p_rx_glbl_pram->
addressfiltering;
cecr_subblock =
static void magic_packet_detection_enable(struct ucc_geth_private *ugeth)
{
struct ucc_fast_private *uccf;
- struct ucc_geth *ug_regs;
+ struct ucc_geth __iomem *ug_regs;
u32 maccfg2, uccm;
uccf = ugeth->uccf;
static void magic_packet_detection_disable(struct ucc_geth_private *ugeth)
{
struct ucc_fast_private *uccf;
- struct ucc_geth *ug_regs;
+ struct ucc_geth __iomem *ug_regs;
u32 maccfg2, uccm;
uccf = ugeth->uccf;
rx_firmware_statistics,
struct ucc_geth_hardware_statistics *hardware_statistics)
{
- struct ucc_fast *uf_regs;
- struct ucc_geth *ug_regs;
+ struct ucc_fast __iomem *uf_regs;
+ struct ucc_geth __iomem *ug_regs;
struct ucc_geth_tx_firmware_statistics_pram *p_tx_fw_statistics_pram;
struct ucc_geth_rx_firmware_statistics_pram *p_rx_fw_statistics_pram;
ug_regs = ugeth->ug_regs;
- uf_regs = (struct ucc_fast *) ug_regs;
+ uf_regs = (struct ucc_fast __iomem *) ug_regs;
p_tx_fw_statistics_pram = ugeth->p_tx_fw_statistics_pram;
p_rx_fw_statistics_pram = ugeth->p_rx_fw_statistics_pram;
}
#endif /* DEBUG */
-static void init_default_reg_vals(volatile u32 *upsmr_register,
- volatile u32 *maccfg1_register,
- volatile u32 *maccfg2_register)
+static void init_default_reg_vals(u32 __iomem *upsmr_register,
+ u32 __iomem *maccfg1_register,
+ u32 __iomem *maccfg2_register)
{
out_be32(upsmr_register, UCC_GETH_UPSMR_INIT);
out_be32(maccfg1_register, UCC_GETH_MACCFG1_INIT);
u8 alt_beb_truncation,
u8 max_retransmissions,
u8 collision_window,
- volatile u32 *hafdup_register)
+ u32 __iomem *hafdup_register)
{
u32 value = 0;
u8 non_btb_ipg,
u8 min_ifg,
u8 btb_ipg,
- volatile u32 *ipgifg_register)
+ u32 __iomem *ipgifg_register)
{
u32 value = 0;
int tx_flow_control_enable,
u16 pause_period,
u16 extension_field,
- volatile u32 *upsmr_register,
- volatile u32 *uempr_register,
- volatile u32 *maccfg1_register)
+ u32 __iomem *upsmr_register,
+ u32 __iomem *uempr_register,
+ u32 __iomem *maccfg1_register)
{
u32 value = 0;
static int init_hw_statistics_gathering_mode(int enable_hardware_statistics,
int auto_zero_hardware_statistics,
- volatile u32 *upsmr_register,
- volatile u16 *uescr_register)
+ u32 __iomem *upsmr_register,
+ u16 __iomem *uescr_register)
{
u32 upsmr_value = 0;
u16 uescr_value = 0;
static int init_firmware_statistics_gathering_mode(int
enable_tx_firmware_statistics,
int enable_rx_firmware_statistics,
- volatile u32 *tx_rmon_base_ptr,
+ u32 __iomem *tx_rmon_base_ptr,
u32 tx_firmware_statistics_structure_address,
- volatile u32 *rx_rmon_base_ptr,
+ u32 __iomem *rx_rmon_base_ptr,
u32 rx_firmware_statistics_structure_address,
- volatile u16 *temoder_register,
- volatile u32 *remoder_register)
+ u16 __iomem *temoder_register,
+ u32 __iomem *remoder_register)
{
/* Note: this function does not check if */
/* the parameters it receives are NULL */
u8 address_byte_3,
u8 address_byte_4,
u8 address_byte_5,
- volatile u32 *macstnaddr1_register,
- volatile u32 *macstnaddr2_register)
+ u32 __iomem *macstnaddr1_register,
+ u32 __iomem *macstnaddr2_register)
{
u32 value = 0;
}
static int init_check_frame_length_mode(int length_check,
- volatile u32 *maccfg2_register)
+ u32 __iomem *maccfg2_register)
{
u32 value = 0;
}
static int init_preamble_length(u8 preamble_length,
- volatile u32 *maccfg2_register)
+ u32 __iomem *maccfg2_register)
{
u32 value = 0;
static int init_rx_parameters(int reject_broadcast,
int receive_short_frames,
- int promiscuous, volatile u32 *upsmr_register)
+ int promiscuous, u32 __iomem *upsmr_register)
{
u32 value = 0;
}
static int init_max_rx_buff_len(u16 max_rx_buf_len,
- volatile u16 *mrblr_register)
+ u16 __iomem *mrblr_register)
{
/* max_rx_buf_len value must be a multiple of 128 */
if ((max_rx_buf_len == 0)
}
static int init_min_frame_len(u16 min_frame_length,
- volatile u16 *minflr_register,
- volatile u16 *mrblr_register)
+ u16 __iomem *minflr_register,
+ u16 __iomem *mrblr_register)
{
u16 mrblr_value = 0;
static int adjust_enet_interface(struct ucc_geth_private *ugeth)
{
struct ucc_geth_info *ug_info;
- struct ucc_geth *ug_regs;
- struct ucc_fast *uf_regs;
+ struct ucc_geth __iomem *ug_regs;
+ struct ucc_fast __iomem *uf_regs;
int ret_val;
u32 upsmr, maccfg2, tbiBaseAddress;
u16 value;
static void adjust_link(struct net_device *dev)
{
struct ucc_geth_private *ugeth = netdev_priv(dev);
- struct ucc_geth *ug_regs;
- struct ucc_fast *uf_regs;
+ struct ucc_geth __iomem *ug_regs;
+ struct ucc_fast __iomem *uf_regs;
struct phy_device *phydev = ugeth->phydev;
unsigned long flags;
int new_state = 0;
uccf = ugeth->uccf;
/* Clear acknowledge bit */
- temp = ugeth->p_rx_glbl_pram->rxgstpack;
+ temp = in_8(&ugeth->p_rx_glbl_pram->rxgstpack);
temp &= ~GRACEFUL_STOP_ACKNOWLEDGE_RX;
- ugeth->p_rx_glbl_pram->rxgstpack = temp;
+ out_8(&ugeth->p_rx_glbl_pram->rxgstpack, temp);
/* Keep issuing command and checking acknowledge bit until
it is asserted, according to spec */
qe_issue_cmd(QE_GRACEFUL_STOP_RX, cecr_subblock,
QE_CR_PROTOCOL_ETHERNET, 0);
- temp = ugeth->p_rx_glbl_pram->rxgstpack;
+ temp = in_8(&ugeth->p_rx_glbl_pram->rxgstpack);
} while (!(temp & GRACEFUL_STOP_ACKNOWLEDGE_RX));
uccf->stopped_rx = 1;
enum enet_addr_type
enet_addr_type)
{
- struct ucc_geth_82xx_address_filtering_pram *p_82xx_addr_filt;
+ struct ucc_geth_82xx_address_filtering_pram __iomem *p_82xx_addr_filt;
struct ucc_fast_private *uccf;
enum comm_dir comm_dir;
struct list_head *p_lh;
u16 i, num;
- u32 *addr_h, *addr_l;
+ u32 __iomem *addr_h;
+ u32 __iomem *addr_l;
u8 *p_counter;
uccf = ugeth->uccf;
p_82xx_addr_filt =
- (struct ucc_geth_82xx_address_filtering_pram *) ugeth->p_rx_glbl_pram->
- addressfiltering;
+ (struct ucc_geth_82xx_address_filtering_pram __iomem *)
+ ugeth->p_rx_glbl_pram->addressfiltering;
if (enet_addr_type == ENET_ADDR_TYPE_GROUP) {
addr_h = &(p_82xx_addr_filt->gaddr_h);
static void ucc_geth_memclean(struct ucc_geth_private *ugeth)
{
u16 i, j;
- u8 *bd;
+ u8 __iomem *bd;
if (!ugeth)
return;
for (j = 0; j < ugeth->ug_info->bdRingLenTx[i]; j++) {
if (ugeth->tx_skbuff[i][j]) {
dma_unmap_single(NULL,
- ((struct qe_bd *)bd)->buf,
- (in_be32((u32 *)bd) &
+ in_be32(&((struct qe_bd __iomem *)bd)->buf),
+ (in_be32((u32 __iomem *)bd) &
BD_LENGTH_MASK),
DMA_TO_DEVICE);
dev_kfree_skb_any(ugeth->tx_skbuff[i][j]);
for (j = 0; j < ugeth->ug_info->bdRingLenRx[i]; j++) {
if (ugeth->rx_skbuff[i][j]) {
dma_unmap_single(NULL,
- ((struct qe_bd *)bd)->buf,
+ in_be32(&((struct qe_bd __iomem *)bd)->buf),
ugeth->ug_info->
uf_info.max_rx_buf_length +
UCC_GETH_RX_DATA_BUF_ALIGNMENT,
{
struct ucc_geth_private *ugeth;
struct dev_mc_list *dmi;
- struct ucc_fast *uf_regs;
- struct ucc_geth_82xx_address_filtering_pram *p_82xx_addr_filt;
+ struct ucc_fast __iomem *uf_regs;
+ struct ucc_geth_82xx_address_filtering_pram __iomem *p_82xx_addr_filt;
int i;
ugeth = netdev_priv(dev);
if (dev->flags & IFF_PROMISC) {
- uf_regs->upsmr |= UPSMR_PRO;
+ out_be32(&uf_regs->upsmr, in_be32(&uf_regs->upsmr) | UPSMR_PRO);
} else {
- uf_regs->upsmr &= ~UPSMR_PRO;
+ out_be32(&uf_regs->upsmr, in_be32(&uf_regs->upsmr)&~UPSMR_PRO);
p_82xx_addr_filt =
- (struct ucc_geth_82xx_address_filtering_pram *) ugeth->
+ (struct ucc_geth_82xx_address_filtering_pram __iomem *) ugeth->
p_rx_glbl_pram->addressfiltering;
if (dev->flags & IFF_ALLMULTI) {
static void ucc_geth_stop(struct ucc_geth_private *ugeth)
{
- struct ucc_geth *ug_regs = ugeth->ug_regs;
+ struct ucc_geth __iomem *ug_regs = ugeth->ug_regs;
struct phy_device *phydev = ugeth->phydev;
u32 tempval;
return -ENOMEM;
}
- ugeth->ug_regs = (struct ucc_geth *) ioremap(uf_info->regs, sizeof(struct ucc_geth));
+ ugeth->ug_regs = (struct ucc_geth __iomem *) ioremap(uf_info->regs, sizeof(struct ucc_geth));
return 0;
}
static int ucc_geth_startup(struct ucc_geth_private *ugeth)
{
- struct ucc_geth_82xx_address_filtering_pram *p_82xx_addr_filt;
- struct ucc_geth_init_pram *p_init_enet_pram;
+ struct ucc_geth_82xx_address_filtering_pram __iomem *p_82xx_addr_filt;
+ struct ucc_geth_init_pram __iomem *p_init_enet_pram;
struct ucc_fast_private *uccf;
struct ucc_geth_info *ug_info;
struct ucc_fast_info *uf_info;
- struct ucc_fast *uf_regs;
- struct ucc_geth *ug_regs;
+ struct ucc_fast __iomem *uf_regs;
+ struct ucc_geth __iomem *ug_regs;
int ret_val = -EINVAL;
u32 remoder = UCC_GETH_REMODER_INIT;
u32 init_enet_pram_offset, cecr_subblock, command, maccfg1;
u16 temoder = UCC_GETH_TEMODER_INIT;
u16 test;
u8 function_code = 0;
- u8 *bd, *endOfRing;
+ u8 __iomem *bd;
+ u8 __iomem *endOfRing;
u8 numThreadsRxNumerical, numThreadsTxNumerical;
ugeth_vdbg("%s: IN", __FUNCTION__);
if (UCC_GETH_TX_BD_RING_ALIGNMENT > 4)
align = UCC_GETH_TX_BD_RING_ALIGNMENT;
ugeth->tx_bd_ring_offset[j] =
- kmalloc((u32) (length + align), GFP_KERNEL);
+ (u32) kmalloc((u32) (length + align), GFP_KERNEL);
if (ugeth->tx_bd_ring_offset[j] != 0)
ugeth->p_tx_bd_ring[j] =
- (void*)((ugeth->tx_bd_ring_offset[j] +
+ (u8 __iomem *)((ugeth->tx_bd_ring_offset[j] +
align) & ~(align - 1));
} else if (uf_info->bd_mem_part == MEM_PART_MURAM) {
ugeth->tx_bd_ring_offset[j] =
UCC_GETH_TX_BD_RING_ALIGNMENT);
if (!IS_ERR_VALUE(ugeth->tx_bd_ring_offset[j]))
ugeth->p_tx_bd_ring[j] =
- (u8 *) qe_muram_addr(ugeth->
+ (u8 __iomem *) qe_muram_addr(ugeth->
tx_bd_ring_offset[j]);
}
if (!ugeth->p_tx_bd_ring[j]) {
return -ENOMEM;
}
/* Zero unused end of bd ring, according to spec */
- memset(ugeth->p_tx_bd_ring[j] +
- ug_info->bdRingLenTx[j] * sizeof(struct qe_bd), 0,
+ memset_io((void __iomem *)(ugeth->p_tx_bd_ring[j] +
+ ug_info->bdRingLenTx[j] * sizeof(struct qe_bd)), 0,
length - ug_info->bdRingLenTx[j] * sizeof(struct qe_bd));
}
if (UCC_GETH_RX_BD_RING_ALIGNMENT > 4)
align = UCC_GETH_RX_BD_RING_ALIGNMENT;
ugeth->rx_bd_ring_offset[j] =
- kmalloc((u32) (length + align), GFP_KERNEL);
+ (u32) kmalloc((u32) (length + align), GFP_KERNEL);
if (ugeth->rx_bd_ring_offset[j] != 0)
ugeth->p_rx_bd_ring[j] =
- (void*)((ugeth->rx_bd_ring_offset[j] +
+ (u8 __iomem *)((ugeth->rx_bd_ring_offset[j] +
align) & ~(align - 1));
} else if (uf_info->bd_mem_part == MEM_PART_MURAM) {
ugeth->rx_bd_ring_offset[j] =
UCC_GETH_RX_BD_RING_ALIGNMENT);
if (!IS_ERR_VALUE(ugeth->rx_bd_ring_offset[j]))
ugeth->p_rx_bd_ring[j] =
- (u8 *) qe_muram_addr(ugeth->
+ (u8 __iomem *) qe_muram_addr(ugeth->
rx_bd_ring_offset[j]);
}
if (!ugeth->p_rx_bd_ring[j]) {
bd = ugeth->confBd[j] = ugeth->txBd[j] = ugeth->p_tx_bd_ring[j];
for (i = 0; i < ug_info->bdRingLenTx[j]; i++) {
/* clear bd buffer */
- out_be32(&((struct qe_bd *)bd)->buf, 0);
+ out_be32(&((struct qe_bd __iomem *)bd)->buf, 0);
/* set bd status and length */
- out_be32((u32 *)bd, 0);
+ out_be32((u32 __iomem *)bd, 0);
bd += sizeof(struct qe_bd);
}
bd -= sizeof(struct qe_bd);
/* set bd status and length */
- out_be32((u32 *)bd, T_W); /* for last BD set Wrap bit */
+ out_be32((u32 __iomem *)bd, T_W); /* for last BD set Wrap bit */
}
/* Init Rx bds */
bd = ugeth->rxBd[j] = ugeth->p_rx_bd_ring[j];
for (i = 0; i < ug_info->bdRingLenRx[j]; i++) {
/* set bd status and length */
- out_be32((u32 *)bd, R_I);
+ out_be32((u32 __iomem *)bd, R_I);
/* clear bd buffer */
- out_be32(&((struct qe_bd *)bd)->buf, 0);
+ out_be32(&((struct qe_bd __iomem *)bd)->buf, 0);
bd += sizeof(struct qe_bd);
}
bd -= sizeof(struct qe_bd);
/* set bd status and length */
- out_be32((u32 *)bd, R_W); /* for last BD set Wrap bit */
+ out_be32((u32 __iomem *)bd, R_W); /* for last BD set Wrap bit */
}
/*
return -ENOMEM;
}
ugeth->p_tx_glbl_pram =
- (struct ucc_geth_tx_global_pram *) qe_muram_addr(ugeth->
+ (struct ucc_geth_tx_global_pram __iomem *) qe_muram_addr(ugeth->
tx_glbl_pram_offset);
/* Zero out p_tx_glbl_pram */
- memset(ugeth->p_tx_glbl_pram, 0, sizeof(struct ucc_geth_tx_global_pram));
+ memset_io((void __iomem *)ugeth->p_tx_glbl_pram, 0, sizeof(struct ucc_geth_tx_global_pram));
/* Fill global PRAM */
}
ugeth->p_thread_data_tx =
- (struct ucc_geth_thread_data_tx *) qe_muram_addr(ugeth->
+ (struct ucc_geth_thread_data_tx __iomem *) qe_muram_addr(ugeth->
thread_dat_tx_offset);
out_be32(&ugeth->p_tx_glbl_pram->tqptr, ugeth->thread_dat_tx_offset);
/* iphoffset */
for (i = 0; i < TX_IP_OFFSET_ENTRY_MAX; i++)
- ugeth->p_tx_glbl_pram->iphoffset[i] = ug_info->iphoffset[i];
+ out_8(&ugeth->p_tx_glbl_pram->iphoffset[i],
+ ug_info->iphoffset[i]);
/* SQPTR */
/* Size varies with number of Tx queues */
}
ugeth->p_send_q_mem_reg =
- (struct ucc_geth_send_queue_mem_region *) qe_muram_addr(ugeth->
+ (struct ucc_geth_send_queue_mem_region __iomem *) qe_muram_addr(ugeth->
send_q_mem_reg_offset);
out_be32(&ugeth->p_tx_glbl_pram->sqptr, ugeth->send_q_mem_reg_offset);
}
ugeth->p_scheduler =
- (struct ucc_geth_scheduler *) qe_muram_addr(ugeth->
+ (struct ucc_geth_scheduler __iomem *) qe_muram_addr(ugeth->
scheduler_offset);
out_be32(&ugeth->p_tx_glbl_pram->schedulerbasepointer,
ugeth->scheduler_offset);
/* Zero out p_scheduler */
- memset(ugeth->p_scheduler, 0, sizeof(struct ucc_geth_scheduler));
+ memset_io((void __iomem *)ugeth->p_scheduler, 0, sizeof(struct ucc_geth_scheduler));
/* Set values in scheduler */
out_be32(&ugeth->p_scheduler->mblinterval,
ug_info->mblinterval);
out_be16(&ugeth->p_scheduler->nortsrbytetime,
ug_info->nortsrbytetime);
- ugeth->p_scheduler->fracsiz = ug_info->fracsiz;
- ugeth->p_scheduler->strictpriorityq = ug_info->strictpriorityq;
- ugeth->p_scheduler->txasap = ug_info->txasap;
- ugeth->p_scheduler->extrabw = ug_info->extrabw;
+ out_8(&ugeth->p_scheduler->fracsiz, ug_info->fracsiz);
+ out_8(&ugeth->p_scheduler->strictpriorityq,
+ ug_info->strictpriorityq);
+ out_8(&ugeth->p_scheduler->txasap, ug_info->txasap);
+ out_8(&ugeth->p_scheduler->extrabw, ug_info->extrabw);
for (i = 0; i < NUM_TX_QUEUES; i++)
- ugeth->p_scheduler->weightfactor[i] =
- ug_info->weightfactor[i];
+ out_8(&ugeth->p_scheduler->weightfactor[i],
+ ug_info->weightfactor[i]);
/* Set pointers to cpucount registers in scheduler */
ugeth->p_cpucount[0] = &(ugeth->p_scheduler->cpucount0);
return -ENOMEM;
}
ugeth->p_tx_fw_statistics_pram =
- (struct ucc_geth_tx_firmware_statistics_pram *)
+ (struct ucc_geth_tx_firmware_statistics_pram __iomem *)
qe_muram_addr(ugeth->tx_fw_statistics_pram_offset);
/* Zero out p_tx_fw_statistics_pram */
- memset(ugeth->p_tx_fw_statistics_pram,
+ memset_io((void __iomem *)ugeth->p_tx_fw_statistics_pram,
0, sizeof(struct ucc_geth_tx_firmware_statistics_pram));
}
return -ENOMEM;
}
ugeth->p_rx_glbl_pram =
- (struct ucc_geth_rx_global_pram *) qe_muram_addr(ugeth->
+ (struct ucc_geth_rx_global_pram __iomem *) qe_muram_addr(ugeth->
rx_glbl_pram_offset);
/* Zero out p_rx_glbl_pram */
- memset(ugeth->p_rx_glbl_pram, 0, sizeof(struct ucc_geth_rx_global_pram));
+ memset_io((void __iomem *)ugeth->p_rx_glbl_pram, 0, sizeof(struct ucc_geth_rx_global_pram));
/* Fill global PRAM */
}
ugeth->p_thread_data_rx =
- (struct ucc_geth_thread_data_rx *) qe_muram_addr(ugeth->
+ (struct ucc_geth_thread_data_rx __iomem *) qe_muram_addr(ugeth->
thread_dat_rx_offset);
out_be32(&ugeth->p_rx_glbl_pram->rqptr, ugeth->thread_dat_rx_offset);
return -ENOMEM;
}
ugeth->p_rx_fw_statistics_pram =
- (struct ucc_geth_rx_firmware_statistics_pram *)
+ (struct ucc_geth_rx_firmware_statistics_pram __iomem *)
qe_muram_addr(ugeth->rx_fw_statistics_pram_offset);
/* Zero out p_rx_fw_statistics_pram */
- memset(ugeth->p_rx_fw_statistics_pram, 0,
+ memset_io((void __iomem *)ugeth->p_rx_fw_statistics_pram, 0,
sizeof(struct ucc_geth_rx_firmware_statistics_pram));
}
}
ugeth->p_rx_irq_coalescing_tbl =
- (struct ucc_geth_rx_interrupt_coalescing_table *)
+ (struct ucc_geth_rx_interrupt_coalescing_table __iomem *)
qe_muram_addr(ugeth->rx_irq_coalescing_tbl_offset);
out_be32(&ugeth->p_rx_glbl_pram->intcoalescingptr,
ugeth->rx_irq_coalescing_tbl_offset);
}
ugeth->p_rx_bd_qs_tbl =
- (struct ucc_geth_rx_bd_queues_entry *) qe_muram_addr(ugeth->
+ (struct ucc_geth_rx_bd_queues_entry __iomem *) qe_muram_addr(ugeth->
rx_bd_qs_tbl_offset);
out_be32(&ugeth->p_rx_glbl_pram->rbdqptr, ugeth->rx_bd_qs_tbl_offset);
/* Zero out p_rx_bd_qs_tbl */
- memset(ugeth->p_rx_bd_qs_tbl,
+ memset_io((void __iomem *)ugeth->p_rx_bd_qs_tbl,
0,
ug_info->numQueuesRx * (sizeof(struct ucc_geth_rx_bd_queues_entry) +
sizeof(struct ucc_geth_rx_prefetched_bds)));
&ugeth->p_rx_glbl_pram->remoder);
/* function code register */
- ugeth->p_rx_glbl_pram->rstate = function_code;
+ out_8(&ugeth->p_rx_glbl_pram->rstate, function_code);
/* initialize extended filtering */
if (ug_info->rxExtendedFiltering) {
}
ugeth->p_exf_glbl_param =
- (struct ucc_geth_exf_global_pram *) qe_muram_addr(ugeth->
+ (struct ucc_geth_exf_global_pram __iomem *) qe_muram_addr(ugeth->
exf_glbl_param_offset);
out_be32(&ugeth->p_rx_glbl_pram->exfGlobalParam,
ugeth->exf_glbl_param_offset);
ugeth_82xx_filtering_clear_addr_in_paddr(ugeth, (u8) j);
p_82xx_addr_filt =
- (struct ucc_geth_82xx_address_filtering_pram *) ugeth->
+ (struct ucc_geth_82xx_address_filtering_pram __iomem *) ugeth->
p_rx_glbl_pram->addressfiltering;
ugeth_82xx_filtering_clear_all_addr_in_hash(ugeth,
return -ENOMEM;
}
p_init_enet_pram =
- (struct ucc_geth_init_pram *) qe_muram_addr(init_enet_pram_offset);
+ (struct ucc_geth_init_pram __iomem *) qe_muram_addr(init_enet_pram_offset);
/* Copy shadow InitEnet command parameter structure into PRAM */
- p_init_enet_pram->resinit1 = ugeth->p_init_enet_param_shadow->resinit1;
- p_init_enet_pram->resinit2 = ugeth->p_init_enet_param_shadow->resinit2;
- p_init_enet_pram->resinit3 = ugeth->p_init_enet_param_shadow->resinit3;
- p_init_enet_pram->resinit4 = ugeth->p_init_enet_param_shadow->resinit4;
+ out_8(&p_init_enet_pram->resinit1,
+ ugeth->p_init_enet_param_shadow->resinit1);
+ out_8(&p_init_enet_pram->resinit2,
+ ugeth->p_init_enet_param_shadow->resinit2);
+ out_8(&p_init_enet_pram->resinit3,
+ ugeth->p_init_enet_param_shadow->resinit3);
+ out_8(&p_init_enet_pram->resinit4,
+ ugeth->p_init_enet_param_shadow->resinit4);
out_be16(&p_init_enet_pram->resinit5,
ugeth->p_init_enet_param_shadow->resinit5);
- p_init_enet_pram->largestexternallookupkeysize =
- ugeth->p_init_enet_param_shadow->largestexternallookupkeysize;
+ out_8(&p_init_enet_pram->largestexternallookupkeysize,
+ ugeth->p_init_enet_param_shadow->largestexternallookupkeysize);
out_be32(&p_init_enet_pram->rgftgfrxglobal,
ugeth->p_init_enet_param_shadow->rgftgfrxglobal);
for (i = 0; i < ENET_INIT_PARAM_MAX_ENTRIES_RX; i++)
#ifdef CONFIG_UGETH_TX_ON_DEMAND
struct ucc_fast_private *uccf;
#endif
- u8 *bd; /* BD pointer */
+ u8 __iomem *bd; /* BD pointer */
u32 bd_status;
u8 txQ = 0;
/* Start from the next BD that should be filled */
bd = ugeth->txBd[txQ];
- bd_status = in_be32((u32 *)bd);
+ bd_status = in_be32((u32 __iomem *)bd);
/* Save the skb pointer so we can free it later */
ugeth->tx_skbuff[txQ][ugeth->skb_curtx[txQ]] = skb;
1) & TX_RING_MOD_MASK(ugeth->ug_info->bdRingLenTx[txQ]);
/* set up the buffer descriptor */
- out_be32(&((struct qe_bd *)bd)->buf,
+ out_be32(&((struct qe_bd __iomem *)bd)->buf,
dma_map_single(NULL, skb->data, skb->len, DMA_TO_DEVICE));
/* printk(KERN_DEBUG"skb->data is 0x%x\n",skb->data); */
bd_status = (bd_status & T_W) | T_R | T_I | T_L | skb->len;
/* set bd status and length */
- out_be32((u32 *)bd, bd_status);
+ out_be32((u32 __iomem *)bd, bd_status);
dev->trans_start = jiffies;
static int ucc_geth_rx(struct ucc_geth_private *ugeth, u8 rxQ, int rx_work_limit)
{
struct sk_buff *skb;
- u8 *bd;
+ u8 __iomem *bd;
u16 length, howmany = 0;
u32 bd_status;
u8 *bdBuffer;
/* collect received buffers */
bd = ugeth->rxBd[rxQ];
- bd_status = in_be32((u32 *)bd);
+ bd_status = in_be32((u32 __iomem *)bd);
/* while there are received buffers and BD is full (~R_E) */
while (!((bd_status & (R_E)) || (--rx_work_limit < 0))) {
- bdBuffer = (u8 *) in_be32(&((struct qe_bd *)bd)->buf);
+ bdBuffer = (u8 *) in_be32(&((struct qe_bd __iomem *)bd)->buf);
length = (u16) ((bd_status & BD_LENGTH_MASK) - 4);
skb = ugeth->rx_skbuff[rxQ][ugeth->skb_currx[rxQ]];
else
bd += sizeof(struct qe_bd);
- bd_status = in_be32((u32 *)bd);
+ bd_status = in_be32((u32 __iomem *)bd);
}
ugeth->rxBd[rxQ] = bd;
{
/* Start from the next BD that should be filled */
struct ucc_geth_private *ugeth = netdev_priv(dev);
- u8 *bd; /* BD pointer */
+ u8 __iomem *bd; /* BD pointer */
u32 bd_status;
bd = ugeth->confBd[txQ];
- bd_status = in_be32((u32 *)bd);
+ bd_status = in_be32((u32 __iomem *)bd);
/* Normal processing. */
while ((bd_status & T_R) == 0) {
bd += sizeof(struct qe_bd);
else
bd = ugeth->p_tx_bd_ring[txQ];
- bd_status = in_be32((u32 *)bd);
+ bd_status = in_be32((u32 __iomem *)bd);
}
ugeth->confBd[txQ] = bd;
return 0;
return -EINVAL;
}
} else {
- prop = of_get_property(np, "rx-clock", NULL);
+ prop = of_get_property(np, "tx-clock", NULL);
if (!prop) {
			printk(KERN_ERR
				"ucc_geth: missing tx-clock-name property\n");
u32 iaddr_l; /* individual address filter, low */
u32 gaddr_h; /* group address filter, high */
u32 gaddr_l; /* group address filter, low */
- struct ucc_geth_82xx_enet_address taddr;
- struct ucc_geth_82xx_enet_address paddr[NUM_OF_PADDRS];
+ struct ucc_geth_82xx_enet_address __iomem taddr;
+ struct ucc_geth_82xx_enet_address __iomem paddr[NUM_OF_PADDRS];
u8 res0[0x40 - 0x38];
} __attribute__ ((packed));
struct ucc_fast_private *uccf;
struct net_device *dev;
struct napi_struct napi;
- struct ucc_geth *ug_regs;
+ struct ucc_geth __iomem *ug_regs;
struct ucc_geth_init_pram *p_init_enet_param_shadow;
- struct ucc_geth_exf_global_pram *p_exf_glbl_param;
+ struct ucc_geth_exf_global_pram __iomem *p_exf_glbl_param;
u32 exf_glbl_param_offset;
- struct ucc_geth_rx_global_pram *p_rx_glbl_pram;
+ struct ucc_geth_rx_global_pram __iomem *p_rx_glbl_pram;
u32 rx_glbl_pram_offset;
- struct ucc_geth_tx_global_pram *p_tx_glbl_pram;
+ struct ucc_geth_tx_global_pram __iomem *p_tx_glbl_pram;
u32 tx_glbl_pram_offset;
- struct ucc_geth_send_queue_mem_region *p_send_q_mem_reg;
+ struct ucc_geth_send_queue_mem_region __iomem *p_send_q_mem_reg;
u32 send_q_mem_reg_offset;
- struct ucc_geth_thread_data_tx *p_thread_data_tx;
+ struct ucc_geth_thread_data_tx __iomem *p_thread_data_tx;
u32 thread_dat_tx_offset;
- struct ucc_geth_thread_data_rx *p_thread_data_rx;
+ struct ucc_geth_thread_data_rx __iomem *p_thread_data_rx;
u32 thread_dat_rx_offset;
- struct ucc_geth_scheduler *p_scheduler;
+ struct ucc_geth_scheduler __iomem *p_scheduler;
u32 scheduler_offset;
- struct ucc_geth_tx_firmware_statistics_pram *p_tx_fw_statistics_pram;
+ struct ucc_geth_tx_firmware_statistics_pram __iomem *p_tx_fw_statistics_pram;
u32 tx_fw_statistics_pram_offset;
- struct ucc_geth_rx_firmware_statistics_pram *p_rx_fw_statistics_pram;
+ struct ucc_geth_rx_firmware_statistics_pram __iomem *p_rx_fw_statistics_pram;
u32 rx_fw_statistics_pram_offset;
- struct ucc_geth_rx_interrupt_coalescing_table *p_rx_irq_coalescing_tbl;
+ struct ucc_geth_rx_interrupt_coalescing_table __iomem *p_rx_irq_coalescing_tbl;
u32 rx_irq_coalescing_tbl_offset;
- struct ucc_geth_rx_bd_queues_entry *p_rx_bd_qs_tbl;
+ struct ucc_geth_rx_bd_queues_entry __iomem *p_rx_bd_qs_tbl;
u32 rx_bd_qs_tbl_offset;
- u8 *p_tx_bd_ring[NUM_TX_QUEUES];
+ u8 __iomem *p_tx_bd_ring[NUM_TX_QUEUES];
u32 tx_bd_ring_offset[NUM_TX_QUEUES];
- u8 *p_rx_bd_ring[NUM_RX_QUEUES];
+ u8 __iomem *p_rx_bd_ring[NUM_RX_QUEUES];
u32 rx_bd_ring_offset[NUM_RX_QUEUES];
- u8 *confBd[NUM_TX_QUEUES];
- u8 *txBd[NUM_TX_QUEUES];
- u8 *rxBd[NUM_RX_QUEUES];
+ u8 __iomem *confBd[NUM_TX_QUEUES];
+ u8 __iomem *txBd[NUM_TX_QUEUES];
+ u8 __iomem *rxBd[NUM_RX_QUEUES];
int badFrame[NUM_RX_QUEUES];
u16 cpucount[NUM_TX_QUEUES];
- volatile u16 *p_cpucount[NUM_TX_QUEUES];
+ u16 __iomem *p_cpucount[NUM_TX_QUEUES];
int indAddrRegUsed[NUM_OF_PADDRS];
u8 paddr[NUM_OF_PADDRS][ENET_NUM_OCTETS_PER_ADDRESS]; /* ethernet address */
u8 numGroupAddrInHash;
int oldlink;
};
+void uec_set_ethtool_ops(struct net_device *netdev);
+int init_flow_control_params(u32 automatic_flow_control_mode,
+ int rx_flow_control_enable, int tx_flow_control_enable,
+ u16 pause_period, u16 extension_field,
+ u32 __iomem *upsmr_register, u32 __iomem *uempr_register,
+ u32 __iomem *maccfg1_register);
+
+
#endif /* __UCC_GETH_H__ */
#define UEC_TX_FW_STATS_LEN ARRAY_SIZE(tx_fw_stat_gstrings)
#define UEC_RX_FW_STATS_LEN ARRAY_SIZE(rx_fw_stat_gstrings)
-extern int init_flow_control_params(u32 automatic_flow_control_mode,
- int rx_flow_control_enable,
- int tx_flow_control_enable, u16 pause_period,
- u16 extension_field, volatile u32 *upsmr_register,
- volatile u32 *uempr_register, volatile u32 *maccfg1_register);
-
static int
uec_get_settings(struct net_device *netdev, struct ethtool_cmd *ecmd)
{
}
/* Reset the MIIM registers, and wait for the bus to free */
-int uec_mdio_reset(struct mii_bus *bus)
+static int uec_mdio_reset(struct mii_bus *bus)
{
struct ucc_mii_mng __iomem *regs = (void __iomem *)bus->priv;
unsigned int timeout = PHY_INIT_TIMEOUT;
return err;
}
-int uec_mdio_remove(struct of_device *ofdev)
+static int uec_mdio_remove(struct of_device *ofdev)
{
struct device *device = &ofdev->dev;
struct mii_bus *bus = dev_get_drvdata(device);
// Buffalo LUA-U2-KTX
USB_DEVICE (0x0411, 0x003d),
.driver_info = (unsigned long) &ax8817x_info,
+}, {
+ // Buffalo LUA-U2-GT 10/100/1000
+ USB_DEVICE (0x0411, 0x006e),
+ .driver_info = (unsigned long) &ax88178_info,
}, {
// Sitecom LN-029 "USB 2.0 10/100 Ethernet adapter"
USB_DEVICE (0x6189, 0x182d),
static int ds1511_rtc_set_time(struct device *dev, struct rtc_time *rtc_tm)
{
u8 mon, day, dow, hrs, min, sec, yrs, cen;
- unsigned int flags;
+ unsigned long flags;
/*
* won't have to change this for a while
static int ds1511_rtc_read_time(struct device *dev, struct rtc_time *rtc_tm)
{
unsigned int century;
- unsigned int flags;
+ unsigned long flags;
spin_lock_irqsave(&ds1511_lock, flags);
rtc_disable_update();
/* This chip uses multiple addresses, use dummy devices for them */
for (i = 1; i < 8; ++i) {
s35390a->client[i] = i2c_new_dummy(client->adapter,
- client->addr + i, "rtc-s35390a");
+ client->addr + i);
if (!s35390a->client[i]) {
dev_err(&client->dev, "Address %02x unavailable\n",
client->addr + i);
goto err_badres;
}
- rtc->regbase = (void __iomem *)rtc->res->start;
+ rtc->regbase = ioremap_nocache(rtc->res->start, rtc->regsize);
if (unlikely(!rtc->regbase)) {
ret = -EINVAL;
goto err_badmap;
&sh_rtc_ops, THIS_MODULE);
if (IS_ERR(rtc->rtc_dev)) {
ret = PTR_ERR(rtc->rtc_dev);
- goto err_badmap;
+ goto err_unmap;
}
rtc->capabilities = RTC_DEF_CAPABILITIES;
dev_err(&pdev->dev,
"request period IRQ failed with %d, IRQ %d\n", ret,
rtc->periodic_irq);
- goto err_badmap;
+ goto err_unmap;
}
ret = request_irq(rtc->carry_irq, sh_rtc_interrupt, IRQF_DISABLED,
"request carry IRQ failed with %d, IRQ %d\n", ret,
rtc->carry_irq);
free_irq(rtc->periodic_irq, rtc);
- goto err_badmap;
+ goto err_unmap;
}
ret = request_irq(rtc->alarm_irq, sh_rtc_alarm, IRQF_DISABLED,
rtc->alarm_irq);
free_irq(rtc->carry_irq, rtc);
free_irq(rtc->periodic_irq, rtc);
- goto err_badmap;
+ goto err_unmap;
}
tmp = readb(rtc->regbase + RCR1);
return 0;
+err_unmap:
+ iounmap(rtc->regbase);
err_badmap:
release_resource(rtc->res);
err_badres:
release_resource(rtc->res);
+ iounmap(rtc->regbase);
+
platform_set_drvdata(pdev, NULL);
kfree(rtc);
* Insert character into the screen at the current position with the
* current color and highlight. This function does NOT do cursor movement.
*/
-static int
-tty3270_put_character(struct tty3270 *tp, char ch)
+static void tty3270_put_character(struct tty3270 *tp, char ch)
{
struct tty3270_line *line;
struct tty3270_cell *cell;
cell->character = tp->view.ascebc[(unsigned int) ch];
cell->highlight = tp->highlight;
cell->f_color = tp->f_color;
- return 1;
}
/*
/*
* Put single characters to the ttys character buffer
*/
-static void
-tty3270_put_char(struct tty_struct *tty, unsigned char ch)
+static int tty3270_put_char(struct tty_struct *tty, unsigned char ch)
{
struct tty3270 *tp;
tp = tty->driver_data;
- if (!tp)
- return;
- if (tp->char_count < TTY3270_CHAR_BUF_SIZE)
- tp->char_buf[tp->char_count++] = ch;
+ if (!tp || tp->char_count >= TTY3270_CHAR_BUF_SIZE)
+ return 0;
+ tp->char_buf[tp->char_count++] = ch;
+ return 1;
}
/*
#include <asm/cio.h>
#include <asm/uaccess.h>
+#include <asm/cio.h>
#include "blacklist.h"
#include "cio.h"
* Function: blacklist_range
* (Un-)blacklist the devices from-to
*/
-static void
-blacklist_range (range_action action, unsigned int from, unsigned int to,
- unsigned int ssid)
+static int blacklist_range(range_action action, unsigned int from_ssid,
+ unsigned int to_ssid, unsigned int from,
+ unsigned int to, int msgtrigger)
{
- if (!to)
- to = from;
-
- if (from > to || to > __MAX_SUBCHANNEL || ssid > __MAX_SSID) {
- printk (KERN_WARNING "cio: Invalid blacklist range "
- "0.%x.%04x to 0.%x.%04x, skipping\n",
- ssid, from, ssid, to);
- return;
+ if ((from_ssid > to_ssid) || ((from_ssid == to_ssid) && (from > to))) {
+ if (msgtrigger)
+ printk(KERN_WARNING "cio: Invalid cio_ignore range "
+ "0.%x.%04x-0.%x.%04x\n", from_ssid, from,
+ to_ssid, to);
+ return 1;
}
- for (; from <= to; from++) {
+
+ while ((from_ssid < to_ssid) || ((from_ssid == to_ssid) &&
+ (from <= to))) {
if (action == add)
- set_bit (from, bl_dev[ssid]);
+ set_bit(from, bl_dev[from_ssid]);
else
- clear_bit (from, bl_dev[ssid]);
+ clear_bit(from, bl_dev[from_ssid]);
+ from++;
+ if (from > __MAX_SUBCHANNEL) {
+ from_ssid++;
+ from = 0;
+ }
}
+
+ return 0;
}
-/*
- * Function: blacklist_busid
- * Get devno/busid from given string.
- * Shamelessly grabbed from dasd_devmap.c.
- */
-static int
-blacklist_busid(char **str, int *id0, int *ssid, int *devno)
+static int pure_hex(char **cp, unsigned int *val, int min_digit,
+ int max_digit, int max_val)
{
- int val, old_style;
- char *sav;
+ int diff;
+ unsigned int value;
- sav = *str;
+ diff = 0;
+ *val = 0;
- /* check for leading '0x' */
- old_style = 0;
- if ((*str)[0] == '0' && (*str)[1] == 'x') {
- *str += 2;
- old_style = 1;
- }
- if (!isxdigit((*str)[0])) /* We require at least one hex digit */
- goto confused;
- val = simple_strtoul(*str, str, 16);
- if (old_style || (*str)[0] != '.') {
- *id0 = *ssid = 0;
- if (val < 0 || val > 0xffff)
- goto confused;
- *devno = val;
- if ((*str)[0] != ',' && (*str)[0] != '-' &&
- (*str)[0] != '\n' && (*str)[0] != '\0')
- goto confused;
- return 0;
+ while (isxdigit(**cp) && (diff <= max_digit)) {
+
+ if (isdigit(**cp))
+ value = **cp - '0';
+ else
+ value = tolower(**cp) - 'a' + 10;
+ *val = *val * 16 + value;
+ (*cp)++;
+ diff++;
}
- /* New style x.y.z busid */
- if (val < 0 || val > 0xff)
- goto confused;
- *id0 = val;
- (*str)++;
- if (!isxdigit((*str)[0])) /* We require at least one hex digit */
- goto confused;
- val = simple_strtoul(*str, str, 16);
- if (val < 0 || val > 0xff || (*str)++[0] != '.')
- goto confused;
- *ssid = val;
- if (!isxdigit((*str)[0])) /* We require at least one hex digit */
- goto confused;
- val = simple_strtoul(*str, str, 16);
- if (val < 0 || val > 0xffff)
- goto confused;
- *devno = val;
- if ((*str)[0] != ',' && (*str)[0] != '-' &&
- (*str)[0] != '\n' && (*str)[0] != '\0')
- goto confused;
+
+ if ((diff < min_digit) || (diff > max_digit) || (*val > max_val))
+ return 1;
+
return 0;
-confused:
- strsep(str, ",\n");
- printk(KERN_WARNING "cio: Invalid cio_ignore parameter '%s'\n", sav);
- return 1;
}
-static int
-blacklist_parse_parameters (char *str, range_action action)
+static int parse_busid(char *str, int *cssid, int *ssid, int *devno,
+ int msgtrigger)
{
- int from, to, from_id0, to_id0, from_ssid, to_ssid;
-
- while (*str != 0 && *str != '\n') {
- range_action ra = action;
- while(*str == ',')
- str++;
- if (*str == '!') {
- ra = !action;
- ++str;
+ char *str_work;
+ int val, rc, ret;
+
+ rc = 1;
+
+ if (*str == '\0')
+ goto out;
+
+ /* old style */
+ str_work = str;
+ val = simple_strtoul(str, &str_work, 16);
+
+ if (*str_work == '\0') {
+ if (val <= __MAX_SUBCHANNEL) {
+ *devno = val;
+ *ssid = 0;
+ *cssid = 0;
+ rc = 0;
}
+ goto out;
+ }
- /*
- * Since we have to parse the proc commands and the
- * kernel arguments we have to check four cases
- */
- if (strncmp(str,"all,",4) == 0 || strcmp(str,"all") == 0 ||
- strncmp(str,"all\n",4) == 0 || strncmp(str,"all ",4) == 0) {
- int j;
-
- str += 3;
- for (j=0; j <= __MAX_SSID; j++)
- blacklist_range(ra, 0, __MAX_SUBCHANNEL, j);
- } else {
- int rc;
+ /* new style */
+ str_work = str;
+ ret = pure_hex(&str_work, cssid, 1, 2, __MAX_CSSID);
+ if (ret || (str_work[0] != '.'))
+ goto out;
+ str_work++;
+ ret = pure_hex(&str_work, ssid, 1, 1, __MAX_SSID);
+ if (ret || (str_work[0] != '.'))
+ goto out;
+ str_work++;
+ ret = pure_hex(&str_work, devno, 4, 4, __MAX_SUBCHANNEL);
+ if (ret || (str_work[0] != '\0'))
+ goto out;
+
+ rc = 0;
+out:
+ if (rc && msgtrigger)
+ printk(KERN_WARNING "cio: Invalid cio_ignore device '%s'\n",
+ str);
+
+ return rc;
+}
- rc = blacklist_busid(&str, &from_id0,
- &from_ssid, &from);
- if (rc)
- continue;
- to = from;
- to_id0 = from_id0;
- to_ssid = from_ssid;
- if (*str == '-') {
- str++;
- rc = blacklist_busid(&str, &to_id0,
- &to_ssid, &to);
- if (rc)
- continue;
- }
- if (*str == '-') {
- printk(KERN_WARNING "cio: invalid cio_ignore "
- "parameter '%s'\n",
- strsep(&str, ",\n"));
- continue;
- }
- if ((from_id0 != to_id0) ||
- (from_ssid != to_ssid)) {
- printk(KERN_WARNING "cio: invalid cio_ignore "
- "range %x.%x.%04x-%x.%x.%04x\n",
- from_id0, from_ssid, from,
- to_id0, to_ssid, to);
- continue;
+static int blacklist_parse_parameters(char *str, range_action action,
+ int msgtrigger)
+{
+ int from_cssid, to_cssid, from_ssid, to_ssid, from, to;
+ int rc, totalrc;
+ char *parm;
+ range_action ra;
+
+ totalrc = 0;
+
+ while ((parm = strsep(&str, ","))) {
+ rc = 0;
+ ra = action;
+ if (*parm == '!') {
+ if (ra == add)
+ ra = free;
+ else
+ ra = add;
+ parm++;
+ }
+ if (strcmp(parm, "all") == 0) {
+ from_cssid = 0;
+ from_ssid = 0;
+ from = 0;
+ to_cssid = __MAX_CSSID;
+ to_ssid = __MAX_SSID;
+ to = __MAX_SUBCHANNEL;
+ } else {
+ rc = parse_busid(strsep(&parm, "-"), &from_cssid,
+ &from_ssid, &from, msgtrigger);
+ if (!rc) {
+ if (parm != NULL)
+ rc = parse_busid(parm, &to_cssid,
+ &to_ssid, &to,
+ msgtrigger);
+ else {
+ to_cssid = from_cssid;
+ to_ssid = from_ssid;
+ to = from;
+ }
}
- blacklist_range (ra, from, to, to_ssid);
}
+ if (!rc) {
+ rc = blacklist_range(ra, from_ssid, to_ssid, from, to,
+ msgtrigger);
+ if (rc)
+ totalrc = 1;
+ } else
+ totalrc = 1;
}
- return 1;
+
+ return totalrc;
}
-/* Parsing the commandline for blacklist parameters, e.g. to blacklist
- * bus ids 0.0.1234, 0.0.1235 and 0.0.1236, you could use any of:
- * - cio_ignore=1234-1236
- * - cio_ignore=0x1234-0x1235,1236
- * - cio_ignore=0x1234,1235-1236
- * - cio_ignore=1236 cio_ignore=1234-0x1236
- * - cio_ignore=1234 cio_ignore=1236 cio_ignore=0x1235
- * - cio_ignore=0.0.1234-0.0.1236
- * - cio_ignore=0.0.1234,0x1235,1236
- * - ...
- */
static int __init
blacklist_setup (char *str)
{
CIO_MSG_EVENT(6, "Reading blacklist parameters\n");
- return blacklist_parse_parameters (str, add);
+ if (blacklist_parse_parameters(str, add, 1))
+ return 0;
+ return 1;
}
__setup ("cio_ignore=", blacklist_setup);
* Function: blacklist_parse_proc_parameters
* parse the stuff which is piped to /proc/cio_ignore
*/
-static void
-blacklist_parse_proc_parameters (char *buf)
+static int blacklist_parse_proc_parameters(char *buf)
{
- if (strncmp (buf, "free ", 5) == 0) {
- blacklist_parse_parameters (buf + 5, free);
- } else if (strncmp (buf, "add ", 4) == 0) {
- /*
- * We don't need to check for known devices since
- * css_probe_device will handle this correctly.
- */
- blacklist_parse_parameters (buf + 4, add);
- } else {
- printk (KERN_WARNING "cio: cio_ignore: Parse error; \n"
- KERN_WARNING "try using 'free all|<devno-range>,"
- "<devno-range>,...'\n"
- KERN_WARNING "or 'add <devno-range>,"
- "<devno-range>,...'\n");
- return;
- }
+ int rc;
+ char *parm;
+
+ parm = strsep(&buf, " ");
+
+ if (strcmp("free", parm) == 0)
+ rc = blacklist_parse_parameters(buf, free, 0);
+ else if (strcmp("add", parm) == 0)
+ rc = blacklist_parse_parameters(buf, add, 0);
+ else
+ return 1;
css_schedule_reprobe();
+
+ return rc;
}
/* Iterator struct for all devices. */
size_t user_len, loff_t *offset)
{
char *buf;
+ size_t i;
+ ssize_t rc, ret;
if (*offset)
return -EINVAL;
buf = vmalloc (user_len + 1); /* maybe better use the stack? */
if (buf == NULL)
return -ENOMEM;
+ memset(buf, 0, user_len + 1);
+
if (strncpy_from_user (buf, user_buf, user_len) < 0) {
- vfree (buf);
- return -EFAULT;
+ rc = -EFAULT;
+ goto out_free;
}
- buf[user_len] = '\0';
- blacklist_parse_proc_parameters (buf);
+ i = user_len - 1;
+ while ((i >= 0) && (isspace(buf[i]) || (buf[i] == 0))) {
+ buf[i] = '\0';
+ i--;
+ }
+ ret = blacklist_parse_proc_parameters(buf);
+ if (ret)
+ rc = -EINVAL;
+ else
+ rc = user_len;
+out_free:
vfree (buf);
- return user_len;
+ return rc;
}
static const struct seq_operations cio_ignore_proc_seq_ops = {
debug_info_t *cio_debug_trace_id;
debug_info_t *cio_debug_crw_id;
-int cio_show_msg;
-
-static int __init
-cio_setup (char *parm)
-{
- if (!strcmp (parm, "yes"))
- cio_show_msg = 1;
- else if (!strcmp (parm, "no"))
- cio_show_msg = 0;
- else
- printk(KERN_ERR "cio: cio_setup: "
- "invalid cio_msg parameter '%s'", parm);
- return 1;
-}
-
-__setup ("cio_msg=", cio_setup);
-
/*
* Function: cio_debug_init
* Initializes three debug logs for common I/O:
stsch (sch->schid, &sch->schib);
- CIO_MSG_EVENT(0, "cio_start: 'not oper' status for "
+ CIO_MSG_EVENT(2, "cio_start: 'not oper' status for "
"subchannel 0.%x.%04x!\n", sch->schid.ssid,
sch->schid.sch_no);
sprintf(dbf_text, "no%s", sch->dev.bus_id);
* ... just being curious we check for non I/O subchannels
*/
if (sch->st != 0) {
- CIO_DEBUG(KERN_INFO, 0,
- "Subchannel 0.%x.%04x reports "
- "non-I/O subchannel type %04X\n",
- sch->schid.ssid, sch->schid.sch_no, sch->st);
+ CIO_MSG_EVENT(4, "Subchannel 0.%x.%04x reports "
+ "non-I/O subchannel type %04X\n",
+ sch->schid.ssid, sch->schid.sch_no, sch->st);
/* We stop here for non-io subchannels. */
err = sch->st;
goto out;
* This device must not be known to Linux. So we simply
* say that there is no device and return ENODEV.
*/
- CIO_MSG_EVENT(4, "Blacklisted device detected "
+ CIO_MSG_EVENT(6, "Blacklisted device detected "
"at devno %04X, subchannel set %x\n",
sch->schib.pmcw.dev, sch->schid.ssid);
err = -ENODEV;
sch->lpm = sch->schib.pmcw.pam & sch->opm;
sch->isc = 3;
- CIO_DEBUG(KERN_INFO, 0,
- "Detected device %04x on subchannel 0.%x.%04X"
- " - PIM = %02X, PAM = %02X, POM = %02X\n",
- sch->schib.pmcw.dev, sch->schid.ssid,
- sch->schid.sch_no, sch->schib.pmcw.pim,
- sch->schib.pmcw.pam, sch->schib.pmcw.pom);
+ CIO_MSG_EVENT(6, "Detected device %04x on subchannel 0.%x.%04X "
+ "- PIM = %02X, PAM = %02X, POM = %02X\n",
+ sch->schib.pmcw.dev, sch->schid.ssid,
+ sch->schid.sch_no, sch->schib.pmcw.pim,
+ sch->schib.pmcw.pam, sch->schib.pmcw.pom);
/*
* We now have to initially ...
#define cio_get_console_priv() NULL
#endif
-extern int cio_show_msg;
-
#endif
}
}
-#define CIO_DEBUG(printk_level, event_level, msg...) do { \
- if (cio_show_msg) \
- printk(printk_level "cio: " msg); \
- CIO_MSG_EVENT(event_level, msg); \
- } while (0)
-
#endif
{
int ret;
- CIO_MSG_EVENT(2, "reprobe start\n");
+ CIO_MSG_EVENT(4, "reprobe start\n");
need_reprobe = 0;
/* Make sure initial subchannel scan is done. */
atomic_read(&ccw_device_init_count) == 0);
ret = for_each_subchannel_staged(NULL, reprobe_subchannel, NULL);
- CIO_MSG_EVENT(2, "reprobe done (rc=%d, need_reprobe=%d)\n", ret,
+ CIO_MSG_EVENT(4, "reprobe done (rc=%d, need_reprobe=%d)\n", ret,
need_reprobe);
}
rc = device_schedule_callback(&cdev->dev,
ccw_device_remove_orphan_cb);
if (rc)
- CIO_MSG_EVENT(2, "Couldn't unregister orphan "
+ CIO_MSG_EVENT(0, "Couldn't unregister orphan "
"0.%x.%04x\n",
cdev->private->dev_id.ssid,
cdev->private->dev_id.devno);
rc = device_schedule_callback(cdev->dev.parent,
ccw_device_remove_sch_cb);
if (rc)
- CIO_MSG_EVENT(2, "Couldn't unregister disconnected device "
+ CIO_MSG_EVENT(0, "Couldn't unregister disconnected device "
"0.%x.%04x\n",
cdev->private->dev_id.ssid,
cdev->private->dev_id.devno);
if (ret == 0)
wait_event(cdev->private->wait_q, dev_fsm_final_state(cdev));
else {
- CIO_MSG_EVENT(2, "ccw_device_offline returned %d, "
+ CIO_MSG_EVENT(0, "ccw_device_offline returned %d, "
"device 0.%x.%04x\n",
ret, cdev->private->dev_id.ssid,
cdev->private->dev_id.devno);
if (ret == 0)
wait_event(cdev->private->wait_q, dev_fsm_final_state(cdev));
else {
- CIO_MSG_EVENT(2, "ccw_device_online returned %d, "
+ CIO_MSG_EVENT(0, "ccw_device_online returned %d, "
"device 0.%x.%04x\n",
ret, cdev->private->dev_id.ssid,
cdev->private->dev_id.devno);
if (ret == 0)
wait_event(cdev->private->wait_q, dev_fsm_final_state(cdev));
else
- CIO_MSG_EVENT(2, "ccw_device_offline returned %d, "
+ CIO_MSG_EVENT(0, "ccw_device_offline returned %d, "
"device 0.%x.%04x\n",
ret, cdev->private->dev_id.ssid,
cdev->private->dev_id.devno);
other_sch = to_subchannel(get_device(cdev->dev.parent));
ret = device_move(&cdev->dev, &sch->dev);
if (ret) {
- CIO_MSG_EVENT(2, "Moving disconnected device 0.%x.%04x failed "
+ CIO_MSG_EVENT(0, "Moving disconnected device 0.%x.%04x failed "
"(ret=%d)!\n", cdev->private->dev_id.ssid,
cdev->private->dev_id.devno, ret);
put_device(&other_sch->dev);
ret = device_reprobe(&cdev->dev);
if (ret)
/* We can't do much here. */
- CIO_MSG_EVENT(2, "device_reprobe() returned"
+ CIO_MSG_EVENT(0, "device_reprobe() returned"
" %d for 0.%x.%04x\n", ret,
cdev->private->dev_id.ssid,
cdev->private->dev_id.devno);
rc = device_move(&cdev->dev, &sch->dev);
mutex_unlock(&sch->reg_mutex);
if (rc) {
- CIO_MSG_EVENT(2, "Moving device 0.%x.%04x to subchannel "
+ CIO_MSG_EVENT(0, "Moving device 0.%x.%04x to subchannel "
"0.%x.%04x failed (ret=%d)!\n",
cdev->private->dev_id.ssid,
cdev->private->dev_id.devno, sch->schid.ssid,
wait_event(cdev->private->wait_q,
dev_fsm_final_state(cdev));
else
- //FIXME: we can't fail!
- CIO_MSG_EVENT(2, "ccw_device_offline returned %d, "
+ CIO_MSG_EVENT(0, "ccw_device_offline returned %d, "
"device 0.%x.%04x\n",
ret, cdev->private->dev_id.ssid,
cdev->private->dev_id.devno);
spin_lock_irq(cdev->ccwlock);
switch (cdev->private->state) {
case DEV_STATE_DISCONNECTED:
- CIO_MSG_EVENT(3, "recovery: trigger 0.%x.%04x\n",
+ CIO_MSG_EVENT(4, "recovery: trigger 0.%x.%04x\n",
cdev->private->dev_id.ssid,
cdev->private->dev_id.devno);
dev_fsm_event(cdev, DEV_EVENT_VERIFY);
}
spin_unlock_irq(&recovery_lock);
} else
- CIO_MSG_EVENT(2, "recovery: end\n");
+ CIO_MSG_EVENT(4, "recovery: end\n");
}
static DECLARE_WORK(recovery_work, recovery_work_func);
{
unsigned long flags;
- CIO_MSG_EVENT(2, "recovery: schedule\n");
+ CIO_MSG_EVENT(4, "recovery: schedule\n");
spin_lock_irqsave(&recovery_lock, flags);
if (!timer_pending(&recovery_timer) || (recovery_phase != 0)) {
recovery_phase = 0;
same_dev = 0; /* Keep the compiler quiet... */
switch (state) {
case DEV_STATE_NOT_OPER:
- CIO_DEBUG(KERN_WARNING, 2,
- "SenseID : unknown device %04x on subchannel "
- "0.%x.%04x\n", cdev->private->dev_id.devno,
- sch->schid.ssid, sch->schid.sch_no);
+ CIO_MSG_EVENT(2, "SenseID : unknown device %04x on "
+ "subchannel 0.%x.%04x\n",
+ cdev->private->dev_id.devno,
+ sch->schid.ssid, sch->schid.sch_no);
break;
case DEV_STATE_OFFLINE:
if (cdev->private->state == DEV_STATE_DISCONNECTED_SENSE_ID) {
return;
}
/* Issue device info message. */
- CIO_DEBUG(KERN_INFO, 2,
- "SenseID : device 0.%x.%04x reports: "
- "CU Type/Mod = %04X/%02X, Dev Type/Mod = "
- "%04X/%02X\n",
- cdev->private->dev_id.ssid,
- cdev->private->dev_id.devno,
- cdev->id.cu_type, cdev->id.cu_model,
- cdev->id.dev_type, cdev->id.dev_model);
+ CIO_MSG_EVENT(4, "SenseID : device 0.%x.%04x reports: "
+ "CU Type/Mod = %04X/%02X, Dev Type/Mod = "
+ "%04X/%02X\n",
+ cdev->private->dev_id.ssid,
+ cdev->private->dev_id.devno,
+ cdev->id.cu_type, cdev->id.cu_model,
+ cdev->id.dev_type, cdev->id.dev_model);
break;
case DEV_STATE_BOXED:
- CIO_DEBUG(KERN_WARNING, 2,
- "SenseID : boxed device %04x on subchannel "
- "0.%x.%04x\n", cdev->private->dev_id.devno,
- sch->schid.ssid, sch->schid.sch_no);
+ CIO_MSG_EVENT(0, "SenseID : boxed device %04x on "
+ " subchannel 0.%x.%04x\n",
+ cdev->private->dev_id.devno,
+ sch->schid.ssid, sch->schid.sch_no);
break;
}
cdev->private->state = state;
if (state == DEV_STATE_BOXED)
- CIO_DEBUG(KERN_WARNING, 2,
- "Boxed device %04x on subchannel %04x\n",
- cdev->private->dev_id.devno, sch->schid.sch_no);
+ CIO_MSG_EVENT(0, "Boxed device %04x on subchannel %04x\n",
+ cdev->private->dev_id.devno, sch->schid.sch_no);
if (cdev->private->flags.donotify) {
cdev->private->flags.donotify = 0;
/* Basic sense hasn't started. Try again. */
ccw_device_do_sense(cdev, irb);
else {
- CIO_MSG_EVENT(2, "Huh? 0.%x.%04x: unsolicited "
+ CIO_MSG_EVENT(0, "0.%x.%04x: unsolicited "
"interrupt during w4sense...\n",
cdev->private->dev_id.ssid,
cdev->private->dev_id.devno);
static void
ccw_device_bug(struct ccw_device *cdev, enum dev_event dev_event)
{
- CIO_MSG_EVENT(0, "dev_jumptable[%i][%i] == NULL\n",
- cdev->private->state, dev_event);
+ CIO_MSG_EVENT(0, "Internal state [%i][%i] not handled for device "
+ "0.%x.%04x\n", cdev->private->state, dev_event,
+ cdev->private->dev_id.ssid,
+ cdev->private->dev_id.devno);
BUG();
}
* sense id information. So, for intervention required,
* we use the "whack it until it talks" strategy...
*/
- CIO_MSG_EVENT(2, "SenseID : device %04x on Subchannel "
+ CIO_MSG_EVENT(0, "SenseID : device %04x on Subchannel "
"0.%x.%04x reports cmd reject\n",
cdev->private->dev_id.devno, sch->schid.ssid,
sch->schid.sch_no);
lpm = to_io_private(sch)->orb.lpm;
if ((lpm & sch->schib.pmcw.pim & sch->schib.pmcw.pam) != 0)
- CIO_MSG_EVENT(2, "SenseID : path %02X for device %04x "
+ CIO_MSG_EVENT(4, "SenseID : path %02X for device %04x "
"on subchannel 0.%x.%04x is "
"'not operational'\n", lpm,
cdev->private->dev_id.devno,
/* ret is 0, -EBUSY, -EACCES or -ENODEV */
if (ret != -EACCES)
return ret;
- CIO_MSG_EVENT(2, "SNID - Device %04x on Subchannel "
+ CIO_MSG_EVENT(3, "SNID - Device %04x on Subchannel "
"0.%x.%04x, lpm %02X, became 'not "
"operational'\n",
cdev->private->dev_id.devno,
u8 lpm;
lpm = to_io_private(sch)->orb.lpm;
- CIO_MSG_EVENT(2, "SNID - Device %04x on Subchannel 0.%x.%04x,"
+ CIO_MSG_EVENT(3, "SNID - Device %04x on Subchannel 0.%x.%04x,"
" lpm %02X, became 'not operational'\n",
cdev->private->dev_id.devno, sch->schid.ssid,
sch->schid.sch_no, lpm);
return ret;
}
/* PGID command failed on this path. */
- CIO_MSG_EVENT(2, "SPID - Device %04x on Subchannel "
+ CIO_MSG_EVENT(3, "SPID - Device %04x on Subchannel "
"0.%x.%04x, lpm %02X, became 'not operational'\n",
cdev->private->dev_id.devno, sch->schid.ssid,
sch->schid.sch_no, cdev->private->imask);
return ret;
}
/* nop command failed on this path. */
- CIO_MSG_EVENT(2, "NOP - Device %04x on Subchannel "
+ CIO_MSG_EVENT(3, "NOP - Device %04x on Subchannel "
"0.%x.%04x, lpm %02X, became 'not operational'\n",
cdev->private->dev_id.devno, sch->schid.ssid,
sch->schid.sch_no, cdev->private->imask);
return -EAGAIN;
}
if (irb->scsw.cc == 3) {
- CIO_MSG_EVENT(2, "SPID - Device %04x on Subchannel 0.%x.%04x,"
+ CIO_MSG_EVENT(3, "SPID - Device %04x on Subchannel 0.%x.%04x,"
" lpm %02X, became 'not operational'\n",
cdev->private->dev_id.devno, sch->schid.ssid,
sch->schid.sch_no, cdev->private->imask);
return -ETIME;
}
if (irb->scsw.cc == 3) {
- CIO_MSG_EVENT(2, "NOP - Device %04x on Subchannel 0.%x.%04x,"
+ CIO_MSG_EVENT(3, "NOP - Device %04x on Subchannel 0.%x.%04x,"
" lpm %02X, became 'not operational'\n",
cdev->private->dev_id.devno, sch->schid.ssid,
sch->schid.sch_no, cdev->private->imask);
int ccode;
struct semaphore *sem;
unsigned int chain;
+ int ignore;
sem = (struct semaphore *)param;
repeat:
- down_interruptible(sem);
+ ignore = down_interruptible(sem);
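+ /* Return value is intentionally ignored; we continue regardless. */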
chain = 0;
while (1) {
if (unlikely(chain > 1)) {
return rcode;
}
-
-/*
- * This routine returns information about the system. This does not effect
- * any logic and if the info is wrong - it doesn't matter.
- */
-
-/* Get all the info we can not get from kernel services */
-static int adpt_system_info(void __user *buffer)
-{
- sysInfo_S si;
-
- memset(&si, 0, sizeof(si));
-
- si.osType = OS_LINUX;
- si.osMajorVersion = 0;
- si.osMinorVersion = 0;
- si.osRevision = 0;
- si.busType = SI_PCI_BUS;
- si.processorFamily = DPTI_sig.dsProcessorFamily;
-
-#if defined __i386__
- adpt_i386_info(&si);
-#elif defined (__ia64__)
- adpt_ia64_info(&si);
-#elif defined(__sparc__)
- adpt_sparc_info(&si);
-#elif defined (__alpha__)
- adpt_alpha_info(&si);
-#else
- si.processorType = 0xff ;
-#endif
- if(copy_to_user(buffer, &si, sizeof(si))){
- printk(KERN_WARNING"dpti: Could not copy buffer TO user\n");
- return -EFAULT;
- }
-
- return 0;
-}
-
#if defined __ia64__
static void adpt_ia64_info(sysInfo_S* si)
{
}
#endif
-
#if defined __sparc__
static void adpt_sparc_info(sysInfo_S* si)
{
si->processorType = PROC_ULTRASPARC;
}
#endif
-
#if defined __alpha__
static void adpt_alpha_info(sysInfo_S* si)
{
#endif
#if defined __i386__
-
static void adpt_i386_info(sysInfo_S* si)
{
// This is all the info we need for now
break;
}
}
+#endif
+
+/*
+ * This routine returns information about the system. It does not affect
+ * any logic, and if the info is wrong it doesn't matter.
+ */
+/* Get all the info we cannot get from kernel services */
+static int adpt_system_info(void __user *buffer)
+{
+ sysInfo_S si;
+
+ memset(&si, 0, sizeof(si));
+
+ si.osType = OS_LINUX;
+ si.osMajorVersion = 0;
+ si.osMinorVersion = 0;
+ si.osRevision = 0;
+ si.busType = SI_PCI_BUS;
+ si.processorFamily = DPTI_sig.dsProcessorFamily;
+
+#if defined __i386__
+ adpt_i386_info(&si);
+#elif defined (__ia64__)
+ adpt_ia64_info(&si);
+#elif defined(__sparc__)
+ adpt_sparc_info(&si);
+#elif defined (__alpha__)
+ adpt_alpha_info(&si);
+#else
+ si.processorType = 0xff;
#endif
+ if (copy_to_user(buffer, &si, sizeof(si))) {
+ printk(KERN_WARNING "dpti: Could not copy buffer TO user\n");
+ return -EFAULT;
+ }
+ return 0;
+}
static int adpt_ioctl(struct inode *inode, struct file *file, uint cmd,
ulong arg)
static void adpt_delay(int millisec);
#endif
-#if defined __ia64__
-static void adpt_ia64_info(sysInfo_S* si);
-#endif
-#if defined __sparc__
-static void adpt_sparc_info(sysInfo_S* si);
-#endif
-#if defined __alpha__
-static void adpt_sparc_info(sysInfo_S* si);
-#endif
-#if defined __i386__
-static void adpt_i386_info(sysInfo_S* si);
-#endif
-
#define PRINT_BUFFER_SIZE 512
#define HBA_FLAGS_DBG_FLAGS_MASK 0xffff0000 // Mask for debug flags
{
struct bfin_serial_port *uart = (struct bfin_serial_port *)port;
struct circ_buf *xmit = &uart->port.info->xmit;
-#if !defined(CONFIG_BF54x) && !defined(CONFIG_SERIAL_BFIN_DMA)
- unsigned short ier;
-#endif
while (!(UART_GET_LSR(uart) & TEMT))
cpu_relax();
#ifdef CONFIG_BF54x
/* Clear TFI bit */
UART_PUT_LSR(uart, TFI);
- UART_CLEAR_IER(uart, ETBEI);
-#else
- ier = UART_GET_IER(uart);
- ier &= ~ETBEI;
- UART_PUT_IER(uart, ier);
#endif
+ UART_CLEAR_IER(uart, ETBEI);
#endif
}
if (uart->tx_done)
bfin_serial_dma_tx_chars(uart);
#else
-#ifdef CONFIG_BF54x
UART_SET_IER(uart, ETBEI);
-#else
- unsigned short ier;
- ier = UART_GET_IER(uart);
- ier |= ETBEI;
- UART_PUT_IER(uart, ier);
-#endif
bfin_serial_tx_chars(uart);
#endif
}
static void bfin_serial_stop_rx(struct uart_port *port)
{
struct bfin_serial_port *uart = (struct bfin_serial_port *)port;
-#ifdef CONFIG_KGDB_UART
- if (uart->port.line != CONFIG_KGDB_UART_PORT) {
+#ifdef CONFIG_KGDB_UART
+ if (uart->port.line != CONFIG_KGDB_UART_PORT)
#endif
-#ifdef CONFIG_BF54x
UART_CLEAR_IER(uart, ERBFI);
-#else
- unsigned short ier;
-
- ier = UART_GET_IER(uart);
- ier &= ~ERBFI;
- UART_PUT_IER(uart, ier);
-#endif
-#ifdef CONFIG_KGDB_UART
- }
-#endif
}
/*
SSYNC();
}
-#ifndef CONFIG_BF54x
- UART_PUT_LCR(uart, UART_GET_LCR(uart)&(~DLAB));
- SSYNC();
-#endif
+ UART_CLEAR_DLAB(uart);
UART_PUT_CHAR(uart, (unsigned char)chr);
SSYNC();
}
while(!(UART_GET_LSR(uart) & DR)) {
SSYNC();
}
-#ifndef CONFIG_BF54x
- UART_PUT_LCR(uart, UART_GET_LCR(uart)&(~DLAB));
- SSYNC();
-#endif
+ UART_CLEAR_DLAB(uart);
chr = UART_GET_CHAR(uart);
SSYNC();
struct tty_struct *tty = uart->port.info->tty;
unsigned int status, ch, flg;
static struct timeval anomaly_start = { .tv_sec = 0 };
-#ifdef CONFIG_KGDB_UART
- struct pt_regs *regs = get_irq_regs();
-#endif
status = UART_GET_LSR(uart);
UART_CLEAR_LSR(uart);
#ifdef CONFIG_KGDB_UART
if (uart->port.line == CONFIG_KGDB_UART_PORT) {
+ struct pt_regs *regs = get_irq_regs();
if (uart->port.cons->index == CONFIG_KGDB_UART_PORT && ch == 0x1) { /* Ctrl + A */
kgdb_breakkey_pressed(regs);
return;
static void bfin_serial_dma_tx_chars(struct bfin_serial_port *uart)
{
struct circ_buf *xmit = &uart->port.info->xmit;
- unsigned short ier;
uart->tx_done = 0;
set_dma_x_modify(uart->tx_dma_channel, 1);
enable_dma(uart->tx_dma_channel);
-#ifdef CONFIG_BF54x
UART_SET_IER(uart, ETBEI);
-#else
- ier = UART_GET_IER(uart);
- ier |= ETBEI;
- UART_PUT_IER(uart, ier);
-#endif
}
static void bfin_serial_dma_rx_chars(struct bfin_serial_port *uart)
{
struct bfin_serial_port *uart = dev_id;
struct circ_buf *xmit = &uart->port.info->xmit;
- unsigned short ier;
spin_lock(&uart->port.lock);
if (!(get_dma_curr_irqstat(uart->tx_dma_channel)&DMA_RUN)) {
disable_dma(uart->tx_dma_channel);
clear_dma_irqstat(uart->tx_dma_channel);
-#ifdef CONFIG_BF54x
UART_CLEAR_IER(uart, ETBEI);
-#else
- ier = UART_GET_IER(uart);
- ier &= ~ETBEI;
- UART_PUT_IER(uart, ier);
-#endif
xmit->tail = (xmit->tail + uart->tx_count) & (UART_XMIT_SIZE - 1);
uart->port.icount.tx += uart->tx_count;
# endif
}
-
if (request_irq
(uart->port.irq+1, bfin_serial_tx_int, IRQF_DISABLED,
"BFIN_UART_TX", uart)) {
return -EBUSY;
}
#endif
-#ifdef CONFIG_BF54x
UART_SET_IER(uart, ERBFI);
-#else
- UART_PUT_IER(uart, UART_GET_IER(uart) | ERBFI);
-#endif
return 0;
}
UART_PUT_IER(uart, 0);
#endif
-#ifndef CONFIG_BF54x
/* Set DLAB in LCR to Access DLL and DLH */
- val = UART_GET_LCR(uart);
- val |= DLAB;
- UART_PUT_LCR(uart, val);
- SSYNC();
-#endif
+ UART_SET_DLAB(uart);
UART_PUT_DLL(uart, quot & 0xFF);
- SSYNC();
UART_PUT_DLH(uart, (quot >> 8) & 0xFF);
SSYNC();
-#ifndef CONFIG_BF54x
/* Clear DLAB in LCR to Access THR RBR IER */
- val = UART_GET_LCR(uart);
- val &= ~DLAB;
- UART_PUT_LCR(uart, val);
- SSYNC();
-#endif
+ UART_CLEAR_DLAB(uart);
UART_PUT_LCR(uart, lcr);
status = UART_GET_IER(uart) & (ERBFI | ETBEI);
if (status == (ERBFI | ETBEI)) {
/* ok, the port was enabled */
- unsigned short lcr, val;
- unsigned short dlh, dll;
+ u16 lcr, dlh, dll;
lcr = UART_GET_LCR(uart);
case 2: *bits = 7; break;
case 3: *bits = 8; break;
}
-#ifndef CONFIG_BF54x
/* Set DLAB in LCR to Access DLL and DLH */
- val = UART_GET_LCR(uart);
- val |= DLAB;
- UART_PUT_LCR(uart, val);
-#endif
+ UART_SET_DLAB(uart);
dll = UART_GET_DLL(uart);
dlh = UART_GET_DLH(uart);
-#ifndef CONFIG_BF54x
/* Clear DLAB in LCR to Access THR RBR IER */
- val = UART_GET_LCR(uart);
- val &= ~DLAB;
- UART_PUT_LCR(uart, val);
-#endif
+ UART_CLEAR_DLAB(uart);
*baud = get_sclk() / (16*(dll | dlh << 8));
}
request_irq(uart->port.irq, bfin_serial_rx_int,
IRQF_DISABLED, "BFIN_UART_RX", uart);
pr_info("Request irq for kgdb uart port\n");
-#ifdef CONFIG_BF54x
UART_SET_IER(uart, ERBFI);
-#else
- UART_PUT_IER(uart, UART_GET_IER(uart) | ERBFI);
-#endif
SSYNC();
t.c_cflag = CS8|B57600;
t.c_iflag = 0;
static void uart_flush_buffer(struct tty_struct *tty)
{
struct uart_state *state = tty->driver_data;
- struct uart_port *port = state->port;
+ struct uart_port *port;
unsigned long flags;
/*
return;
}
+ port = state->port;
pr_debug("uart_flush_buffer(%d) called\n", tty->index);
spin_lock_irqsave(&port->lock, flags);
#include <linux/console.h>
#include <linux/platform_device.h>
#include <linux/serial_sci.h>
-
-#ifdef CONFIG_CPU_FREQ
#include <linux/notifier.h>
#include <linux/cpufreq.h>
-#endif
-
-#if defined(CONFIG_SUPERH) && !defined(CONFIG_SUPERH64)
+#include <linux/clk.h>
#include <linux/ctype.h>
+
+#ifdef CONFIG_SUPERH
#include <asm/clock.h>
#include <asm/sh_bios.h>
#include <asm/kgdb.h>
struct timer_list break_timer;
int break_flag;
-#if defined(CONFIG_SUPERH) && !defined(CONFIG_SUPERH64)
+#ifdef CONFIG_SUPERH
/* Port clock */
struct clk *clk;
#endif
static void sci_init_pins_scif(struct uart_port *port, unsigned int cflag)
{
unsigned int fcr_val = 0;
+ unsigned short data;
- if (cflag & CRTSCTS) {
- fcr_val |= SCFCR_MCE;
-
- ctrl_outw(0x0000, PORT_PSCR);
- } else {
- unsigned short data;
-
- data = ctrl_inw(PORT_PSCR);
- data &= 0x033f;
- data |= 0x0400;
- ctrl_outw(data, PORT_PSCR);
+ if (port->mapbase == 0xffe00000) {
+ data = ctrl_inw(PSCR);
+ data &= ~0x03cf;
+ if (cflag & CRTSCTS)
+ fcr_val |= SCFCR_MCE;
+ else
+ data |= 0x0340;
- ctrl_outw(ctrl_inw(SCSPTR0) & 0x17, SCSPTR0);
+ ctrl_outw(data, PSCR);
}
+ /* SCIF1 and SCIF2 should be set up by board code */
sci_out(port, SCFCR, fcr_val);
}
# define SCSCR_INIT(port) 0x32 /* TIE=0,RIE=0,TE=1,RE=1,REIE=0,CKE=1 */
# define SCIF_ONLY
#elif defined(CONFIG_CPU_SUBTYPE_SH7722)
-# define SCPDR0 0xA405013E /* 16 bit SCIF0 PSDR */
-# define SCSPTR0 SCPDR0
+# define PADR 0xA4050120
+# define PSDR 0xA405013e
+# define PWDR 0xA4050166
+# define PSCR 0xA405011E
# define SCIF_ORER 0x0001 /* overrun error bit */
# define SCSCR_INIT(port) 0x0038 /* TIE=0,RIE=0,TE=1,RE=1,REIE=1 */
# define SCIF_ONLY
-# define PORT_PSCR 0xA405011E
#elif defined(CONFIG_CPU_SUBTYPE_SH7366)
# define SCPDR0 0xA405013E /* 16 bit SCIF0 PSDR */
# define SCSPTR0 SCPDR0
unsigned int addr = port->mapbase + (offset); \
if ((size) == 8) { \
ctrl_outb(value, addr); \
- } else { \
+ } else if ((size) == 16) { \
ctrl_outw(value, addr); \
}
SCIF_FNS(SCLSR, 0, 0, 0x28, 16)
#else
SCIF_FNS(SCFDR, 0x0e, 16, 0x1C, 16)
+#if defined(CONFIG_CPU_SUBTYPE_SH7722)
+SCIF_FNS(SCSPTR, 0, 0, 0, 0)
+#else
SCIF_FNS(SCSPTR, 0, 0, 0x20, 16)
+#endif
SCIF_FNS(SCLSR, 0, 0, 0x24, 16)
#endif
#endif
return ctrl_inw(SCSPTR3) & 0x0001 ? 1 : 0; /* SCIF */
return 1;
}
-#elif defined(CONFIG_CPU_SUBTYPE_SH7722) || defined(CONFIG_CPU_SUBTYPE_SH7366)
+#elif defined(CONFIG_CPU_SUBTYPE_SH7366)
static inline int sci_rxd_in(struct uart_port *port)
{
if (port->mapbase == 0xffe00000)
return ctrl_inb(SCPDR0) & 0x0001 ? 1 : 0; /* SCIF0 */
return 1;
}
+#elif defined(CONFIG_CPU_SUBTYPE_SH7722)
+static inline int sci_rxd_in(struct uart_port *port)
+{
+ if (port->mapbase == 0xffe00000)
+ return ctrl_inb(PSDR) & 0x02 ? 1 : 0; /* SCIF0 */
+ if (port->mapbase == 0xffe10000)
+ return ctrl_inb(PADR) & 0x40 ? 1 : 0; /* SCIF1 */
+ if (port->mapbase == 0xffe20000)
+ return ctrl_inb(PWDR) & 0x04 ? 1 : 0; /* SCIF2 */
+
+ return 1;
+}
#elif defined(CONFIG_CPU_SUBTYPE_SH7723)
static inline int sci_rxd_in(struct uart_port *port)
{
{ USB_DEVICE(0x10C4, 0x814A) }, /* West Mountain Radio RIGblaster P&P */
{ USB_DEVICE(0x10C4, 0x814B) }, /* West Mountain Radio RIGtalk */
{ USB_DEVICE(0x10C4, 0x815E) }, /* Helicomm IP-Link 1220-DVM */
+ { USB_DEVICE(0x10C4, 0x81A6) }, /* ThinkOptics WavIt */
{ USB_DEVICE(0x10C4, 0x81AC) }, /* MSD Dash Hawk */
{ USB_DEVICE(0x10C4, 0x81C8) }, /* Lipowsky Industrie Elektronik GmbH, Baby-JTAG */
{ USB_DEVICE(0x10C4, 0x81E2) }, /* Lipowsky Industrie Elektronik GmbH, Baby-LIN */
static int iuu_bulk_write(struct usb_serial_port *port)
{
struct iuu_private *priv = usb_get_serial_port_data(port);
- unsigned int flags;
+ unsigned long flags;
int result;
int i;
char *buf_ptr = port->write_urb->transfer_buffer;
{
struct usb_serial_port *port = urb->context;
struct iuu_private *priv = usb_get_serial_port_data(port);
- unsigned int flags;
+ unsigned long flags;
int status;
int error = 0;
int len = 0;
int count)
{
struct iuu_private *priv = usb_get_serial_port_data(port);
- unsigned int flags;
+ unsigned long flags;
dbg("%s - enter", __func__);
if (count > 256)
#include <linux/init.h>
#include <linux/fb.h>
#include <linux/mm.h>
+#include <linux/of_device.h>
#include <asm/io.h>
-#include <asm/oplib.h>
-#include <asm/prom.h>
-#include <asm/of_device.h>
#include <asm/fbio.h>
#include "sbuslib.h"
par->physbase = op->resource[0].start;
par->which_io = op->resource[0].flags & IORESOURCE_BITS;
- sbusfb_fill_var(&info->var, dp->node, 1);
+ sbusfb_fill_var(&info->var, dp, 1);
linebytes = of_getintprop_default(dp, "linebytes",
info->var.xres);
#include <linux/fb.h>
#include <linux/mm.h>
#include <linux/uaccess.h>
+#include <linux/of_device.h>
#include <asm/io.h>
-#include <asm/prom.h>
-#include <asm/of_device.h>
#include <asm/fbio.h>
#include "sbuslib.h"
spin_lock_init(&par->lock);
- sbusfb_fill_var(&info->var, dp->node, 8);
+ sbusfb_fill_var(&info->var, dp, 8);
info->var.red.length = 8;
info->var.green.length = 8;
info->var.blue.length = 8;
#include <linux/init.h>
#include <linux/fb.h>
#include <linux/mm.h>
+#include <linux/of_device.h>
#include <asm/io.h>
-#include <asm/oplib.h>
-#include <asm/prom.h>
-#include <asm/of_device.h>
#include <asm/fbio.h>
#include "sbuslib.h"
par->physbase = op->resource[0].start;
par->which_io = op->resource[0].flags & IORESOURCE_BITS;
- sbusfb_fill_var(&info->var, dp->node, 8);
+ sbusfb_fill_var(&info->var, dp, 8);
info->var.red.length = 8;
info->var.green.length = 8;
info->var.blue.length = 8;
#include <linux/init.h>
#include <linux/fb.h>
#include <linux/mm.h>
+#include <linux/of_device.h>
#include <asm/io.h>
-#include <asm/of_device.h>
#include <asm/fbio.h>
#include "sbuslib.h"
par->physbase = op->resource[0].start;
par->which_io = op->resource[0].flags & IORESOURCE_BITS;
- sbusfb_fill_var(&info->var, dp->node, 8);
+ sbusfb_fill_var(&info->var, dp, 8);
info->var.red.length = 8;
info->var.green.length = 8;
info->var.blue.length = 8;
#include <linux/fb.h>
#include <linux/mm.h>
#include <linux/timer.h>
+#include <linux/of_device.h>
#include <asm/io.h>
#include <asm/upa.h>
-#include <asm/prom.h>
-#include <asm/of_device.h>
#include <asm/fbio.h>
#include "sbuslib.h"
info->screen_base = (char *) par->physbase + FFB_DFB24_POFF;
info->pseudo_palette = par->pseudo_palette;
- sbusfb_fill_var(&info->var, dp->node, 32);
+ sbusfb_fill_var(&info->var, dp, 32);
par->fbsize = PAGE_ALIGN(info->var.xres * info->var.yres * 4);
ffb_fixup_var_rgb(&info->var);
#include <linux/init.h>
#include <linux/fb.h>
#include <linux/mm.h>
+#include <linux/of_device.h>
#include <asm/io.h>
-#include <asm/prom.h>
-#include <asm/of_device.h>
#include <asm/fbio.h>
#include "sbuslib.h"
par->physbase = op->resource[0].start;
par->which_io = op->resource[0].flags & IORESOURCE_BITS;
- sbusfb_fill_var(&info->var, dp->node, 32);
+ sbusfb_fill_var(&info->var, dp, 32);
leo_fixup_var_rgb(&info->var);
linebytes = of_getintprop_default(dp, "linebytes",
#include <linux/init.h>
#include <linux/fb.h>
#include <linux/mm.h>
+#include <linux/of_device.h>
#include <asm/io.h>
-#include <asm/prom.h>
-#include <asm/of_device.h>
#include <asm/fbio.h>
#include "sbuslib.h"
par->physbase = op->resource[2].start;
par->which_io = op->resource[2].flags & IORESOURCE_BITS;
- sbusfb_fill_var(&info->var, dp->node, 8);
+ sbusfb_fill_var(&info->var, dp, 8);
info->var.red.length = 8;
info->var.green.length = 8;
info->var.blue.length = 8;
}
}
-static int pxafb_decode_mach_info(struct pxafb_info *fbi,
- struct pxafb_mach_info *inf)
+static void pxafb_decode_mach_info(struct pxafb_info *fbi,
+ struct pxafb_mach_info *inf)
{
unsigned int lcd_conn = inf->lcd_conn;
fbi->lccr0 = inf->lccr0;
fbi->lccr3 = inf->lccr3;
fbi->lccr4 = inf->lccr4;
- return -EINVAL;
+ goto decode_mode;
}
if (lcd_conn == LCD_MONO_STN_8BPP)
fbi->lccr3 |= (lcd_conn & LCD_BIAS_ACTIVE_LOW) ? LCCR3_OEP : 0;
fbi->lccr3 |= (lcd_conn & LCD_PCLK_EDGE_FALL) ? LCCR3_PCP : 0;
+decode_mode:
pxafb_decode_mode_info(fbi, inf->modes, inf->num_modes);
- return 0;
}
static struct pxafb_info * __init pxafb_init_fbinfo(struct device *dev)
#include <linux/fb.h>
#include <linux/mm.h>
#include <linux/uaccess.h>
+#include <linux/of_device.h>
-#include <asm/oplib.h>
#include <asm/fbio.h>
#include "sbuslib.h"
-void sbusfb_fill_var(struct fb_var_screeninfo *var, int prom_node, int bpp)
+void sbusfb_fill_var(struct fb_var_screeninfo *var, struct device_node *dp,
+ int bpp)
{
memset(var, 0, sizeof(*var));
- var->xres = prom_getintdefault(prom_node, "width", 1152);
- var->yres = prom_getintdefault(prom_node, "height", 900);
+ var->xres = of_getintprop_default(dp, "width", 1152);
+ var->yres = of_getintprop_default(dp, "height", 900);
var->xres_virtual = var->xres;
var->yres_virtual = var->yres;
var->bits_per_pixel = bpp;
#define SBUS_MMAP_FBSIZE(n) (-n)
#define SBUS_MMAP_EMPTY 0x80000000
-extern void sbusfb_fill_var(struct fb_var_screeninfo *var, int prom_node, int bpp);
+extern void sbusfb_fill_var(struct fb_var_screeninfo *var,
+ struct device_node *dp, int bpp);
struct vm_area_struct;
extern int sbusfb_mmap_helper(struct sbus_mmap_map *map,
unsigned long physbase, unsigned long fbsize,
struct fb_info *info,
int type, int fb_depth, unsigned long fb_size);
int sbusfb_compat_ioctl(struct fb_info *info, unsigned int cmd,
- unsigned long arg);
+ unsigned long arg);
#endif /* _SBUSLIB_H */
#include <linux/fb.h>
#include <linux/pci.h>
#include <linux/init.h>
+#include <linux/of_device.h>
#include <asm/io.h>
-#include <asm/prom.h>
-#include <asm/of_device.h>
struct s3d_info {
struct fb_info *info;
#include <linux/fb.h>
#include <linux/pci.h>
#include <linux/init.h>
+#include <linux/of_device.h>
#include <asm/io.h>
-#include <asm/prom.h>
-#include <asm/of_device.h>
/* XXX This device has a 'dev-comm' property which aparently is
* XXX a pointer into the openfirmware's address space which is
#include <linux/init.h>
#include <linux/fb.h>
#include <linux/mm.h>
+#include <linux/of_device.h>
#include <asm/io.h>
-#include <asm/prom.h>
-#include <asm/of_device.h>
#include <asm/fbio.h>
#include "sbuslib.h"
par->lowdepth =
(of_find_property(dp, "tcx-8-bit", NULL) != NULL);
- sbusfb_fill_var(&info->var, dp->node, 8);
+ sbusfb_fill_var(&info->var, dp, 8);
info->var.red.length = 8;
info->var.green.length = 8;
info->var.blue.length = 8;
bio_init(bio);
if (likely(nr_iovecs)) {
- unsigned long idx = 0; /* shut up gcc */
+ unsigned long uninitialized_var(idx);
bvl = bvec_alloc_bs(gfp_mask, nr_iovecs, &idx, bs);
if (unlikely(!bvl)) {
* @data: pointer to buffer to copy
* @len: length in bytes
* @gfp_mask: allocation flags for bio and page allocation
+ * @reading: data direction is READ
*
* copy the kernel address into a bio suitable for io to a block
* device. Returns an error pointer in case of error.
+Version 1.53
+------------
+
Version 1.52
------------
Fix oops on second mount to server when null auth is used.
unsigned char *sequence_end;
unsigned long *oid = NULL;
unsigned int cls, con, tag, oidlen, rc;
- int use_ntlmssp = FALSE;
- int use_kerberos = FALSE;
+ bool use_ntlmssp = false;
+ bool use_kerberos = false;
*secType = NTLM; /* BB eventually make Kerberos or NLTMSSP the default*/
if (compare_oid(oid, oidlen,
MSKRB5_OID,
MSKRB5_OID_LEN))
- use_kerberos = TRUE;
+ use_kerberos = true;
else if (compare_oid(oid, oidlen,
KRB5_OID,
KRB5_OID_LEN))
- use_kerberos = TRUE;
+ use_kerberos = true;
else if (compare_oid(oid, oidlen,
NTLMSSP_OID,
NTLMSSP_OID_LEN))
- use_ntlmssp = TRUE;
+ use_ntlmssp = true;
kfree(oid);
}
/* find sharename end */
pSep++;
pSep = memchr(UNC+(pSep-UNC), '\\', len-(pSep-UNC));
- if (!pSep) {
- cERROR(1, ("%s:2 cant find share name in node name: %s",
- __func__, node_name));
- kfree(UNC);
- return NULL;
+ if (pSep) {
+ /* trim path up to sharename end
+ * now we have share name in UNC */
+ *pSep = 0;
}
- /* trim path up to sharename end
- * * now we have share name in UNC */
- *pSep = 0;
return UNC;
}
tkn_e = strchr(tkn_e+1, '\\');
if (tkn_e) {
strcat(mountdata, ",prefixpath=");
- strcat(mountdata, tkn_e);
+ strcat(mountdata, tkn_e+1);
}
}
return NULL;
if (cifs_sb->tcon->Flags & SMB_SHARE_IS_IN_DFS) {
- /* we should use full path name to correct working with DFS */
+ int i;
+ /* we should use the full path name to work correctly with DFS */
l_max_len = strnlen(cifs_sb->tcon->treeName, MAX_TREE_SIZE+1) +
strnlen(search_path, MAX_PATHCONF) + 1;
tmp_path = kmalloc(l_max_len, GFP_KERNEL);
return NULL;
}
strncpy(tmp_path, cifs_sb->tcon->treeName, l_max_len);
- strcat(tmp_path, search_path);
tmp_path[l_max_len-1] = 0;
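+ /* With POSIX paths, convert the '\\' separators in the tree name to '/'. */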
+ if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_POSIX_PATHS)
+ for (i = 0; i < l_max_len; i++) {
+ if (tmp_path[i] == '\\')
+ tmp_path[i] = '/';
+ }
+ strncat(tmp_path, search_path, l_max_len - strlen(tmp_path));
+
full_path = tmp_path;
kfree(search_path);
} else {
const char *path, const __u16 *pfid)
{
struct cifsFileInfo *open_file = NULL;
- int unlock_file = FALSE;
+ bool unlock_file = false;
int xid;
int rc = -EIO;
__u16 fid;
cifs_sb = CIFS_SB(sb);
if (open_file) {
- unlock_file = TRUE;
+ unlock_file = true;
fid = open_file->netfid;
} else if (pfid == NULL) {
- int oplock = FALSE;
+ int oplock = 0;
/* open file */
rc = CIFSSMBOpen(xid, cifs_sb->tcon, path, FILE_OPEN,
READ_CONTROL, 0, &fid, &oplock, NULL,
rc = CIFSSMBGetCIFSACL(xid, cifs_sb->tcon, fid, &pntsd, pacllen);
cFYI(1, ("GetCIFSACL rc = %d ACL len %d", rc, *pacllen));
- if (unlock_file == TRUE) /* find_readable_file increments ref count */
+ if (unlock_file == true) /* find_readable_file increments ref count */
atomic_dec(&open_file->wrtPending);
else if (pfid == NULL) /* if opened above we have to close the handle */
CIFSSMBClose(xid, cifs_sb->tcon, fid);
struct inode *inode, const char *path)
{
struct cifsFileInfo *open_file;
- int unlock_file = FALSE;
+ bool unlock_file = false;
int xid;
int rc = -EIO;
__u16 fid;
open_file = find_readable_file(CIFS_I(inode));
if (open_file) {
- unlock_file = TRUE;
+ unlock_file = true;
fid = open_file->netfid;
} else {
- int oplock = FALSE;
+ int oplock = 0;
/* open file */
rc = CIFSSMBOpen(xid, cifs_sb->tcon, path, FILE_OPEN,
WRITE_DAC, 0, &fid, &oplock, NULL,
rc = CIFSSMBSetCIFSACL(xid, cifs_sb->tcon, fid, pnntsd, acllen);
cFYI(DBG2, ("SetCIFSACL rc = %d", rc));
- if (unlock_file == TRUE)
+ if (unlock_file)
atomic_dec(&open_file->wrtPending);
else
CIFSSMBClose(xid, cifs_sb->tcon, fid);
cifs_statfs(struct dentry *dentry, struct kstatfs *buf)
{
struct super_block *sb = dentry->d_sb;
- int xid;
+ struct cifs_sb_info *cifs_sb = CIFS_SB(sb);
+ struct cifsTconInfo *tcon = cifs_sb->tcon;
int rc = -EOPNOTSUPP;
- struct cifs_sb_info *cifs_sb;
- struct cifsTconInfo *pTcon;
+ int xid;
xid = GetXid();
- cifs_sb = CIFS_SB(sb);
- pTcon = cifs_sb->tcon;
-
buf->f_type = CIFS_MAGIC_NUMBER;
- /* instead could get the real value via SMB_QUERY_FS_ATTRIBUTE_INFO */
- buf->f_namelen = PATH_MAX; /* PATH_MAX may be too long - it would
- presumably be total path, but note
- that some servers (includinng Samba 3)
- have a shorter maximum path */
+ /*
+ * PATH_MAX may be too long - it would presumably be total path,
+ * but note that some servers (including Samba 3) have a shorter
+ * maximum path.
+ *
+ * Instead could get the real value via SMB_QUERY_FS_ATTRIBUTE_INFO.
+ */
+ buf->f_namelen = PATH_MAX;
buf->f_files = 0; /* undefined */
buf->f_ffree = 0; /* unlimited */
-/* BB we could add a second check for a QFS Unix capability bit */
-/* BB FIXME check CIFS_POSIX_EXTENSIONS Unix cap first FIXME BB */
- if ((pTcon->ses->capabilities & CAP_UNIX) && (CIFS_POSIX_EXTENSIONS &
- le64_to_cpu(pTcon->fsUnixInfo.Capability)))
- rc = CIFSSMBQFSPosixInfo(xid, pTcon, buf);
-
- /* Only need to call the old QFSInfo if failed
- on newer one */
- if (rc)
- if (pTcon->ses->capabilities & CAP_NT_SMBS)
- rc = CIFSSMBQFSInfo(xid, pTcon, buf); /* not supported by OS2 */
-
- /* Some old Windows servers also do not support level 103, retry with
- older level one if old server failed the previous call or we
- bypassed it because we detected that this was an older LANMAN sess */
+ /*
+ * We could add a second check for a QFS Unix capability bit
+ */
+ if ((tcon->ses->capabilities & CAP_UNIX) &&
+ (CIFS_POSIX_EXTENSIONS & le64_to_cpu(tcon->fsUnixInfo.Capability)))
+ rc = CIFSSMBQFSPosixInfo(xid, tcon, buf);
+
+ /*
+ * Only need to call the old QFSInfo if the newer call failed,
+ * e.g. against OS/2.
+ */
+ if (rc && (tcon->ses->capabilities & CAP_NT_SMBS))
+ rc = CIFSSMBQFSInfo(xid, tcon, buf);
+
+ /*
+ * Some old Windows servers also do not support level 103; retry with
+ * the older level-one call if the server failed the previous call, or
+ * if we bypassed it because we detected an older LANMAN session.
+ */
if (rc)
- rc = SMBOldQFSInfo(xid, pTcon, buf);
- /* int f_type;
- __fsid_t f_fsid;
- int f_namelen; */
- /* BB get from info in tcon struct at mount time call to QFSAttrInfo */
+ rc = SMBOldQFSInfo(xid, tcon, buf);
+
FreeXid(xid);
- return 0; /* always return success? what if volume is no
- longer available? */
+ return 0;
}
static int cifs_permission(struct inode *inode, int mask, struct nameidata *nd)
/* Until the file is open and we have gotten oplock
info back from the server, can not assume caching of
file data or metadata */
- cifs_inode->clientCanCacheRead = FALSE;
- cifs_inode->clientCanCacheAll = FALSE;
+ cifs_inode->clientCanCacheRead = false;
+ cifs_inode->clientCanCacheAll = false;
cifs_inode->vfs_inode.i_blkbits = 14; /* 2**14 = CIFS_MAX_MSGSIZE */
/* Can not set i_flags here - they get immediately overwritten
rc = CIFSSMBLock(0, pTcon, netfid,
0 /* len */ , 0 /* offset */, 0,
0, LOCKING_ANDX_OPLOCK_RELEASE,
- 0 /* wait flag */);
+ false /* wait flag */);
cFYI(1, ("Oplock release rc = %d", rc));
}
} else
#define ROOT_I 2
-#ifndef FALSE
-#define FALSE 0
-#endif
-
-#ifndef TRUE
-#define TRUE 1
-#endif
-
extern struct file_system_type cifs_fs_type;
extern const struct address_space_operations cifs_addr_ops;
extern const struct address_space_operations cifs_addr_ops_smallbuf;
extern const struct export_operations cifs_export_ops;
#endif /* EXPERIMENTAL */
-#define CIFS_VERSION "1.52"
+#define CIFS_VERSION "1.53"
#endif /* _CIFSFS_H */
#include "cifspdu.h"
-#ifndef FALSE
-#define FALSE 0
-#endif
-
-#ifndef TRUE
-#define TRUE 1
-#endif
-
#ifndef XATTR_DOS_ATTRIB
#define XATTR_DOS_ATTRIB "user.DOSATTRIB"
#endif
enum protocolEnum protocolType;
char versionMajor;
char versionMinor;
- unsigned svlocal:1; /* local server or remote */
+ bool svlocal:1; /* local server or remote */
atomic_t socketUseCount; /* number of open cifs sessions on socket */
atomic_t inFlight; /* number of requests on the wire to server */
#ifdef CONFIG_CIFS_STATS2
FILE_SYSTEM_DEVICE_INFO fsDevInfo;
FILE_SYSTEM_ATTRIBUTE_INFO fsAttrInfo; /* ok if fs name truncated */
FILE_SYSTEM_UNIX_INFO fsUnixInfo;
- unsigned ipc:1; /* set if connection to IPC$ eg for RPC/PIPES */
- unsigned retry:1;
- unsigned nocase:1;
- unsigned unix_ext:1; /* if off disable Linux extensions to CIFS protocol
+ bool ipc:1; /* set if connection to IPC$ eg for RPC/PIPES */
+ bool retry:1;
+ bool nocase:1;
+ bool unix_ext:1; /* if false disable Linux extensions to CIFS protocol
for this mount even if server would support */
/* BB add field for back pointer to sb struct(s)? */
};
char *srch_entries_start;
char *presume_name;
unsigned int resume_name_len;
- unsigned endOfSearch:1;
- unsigned emptyDir:1;
- unsigned unicode:1;
- unsigned smallBuf:1; /* so we know which buf_release function to call */
+ bool endOfSearch:1;
+ bool emptyDir:1;
+ bool unicode:1;
+ bool smallBuf:1; /* so we know which buf_release function to call */
};
struct cifsFileInfo {
struct inode *pInode; /* needed for oplock break */
struct mutex lock_mutex;
struct list_head llist; /* list of byte range locks we have. */
- unsigned closePend:1; /* file is marked to close */
- unsigned invalidHandle:1; /* file closed via session abend */
- unsigned messageMode:1; /* for pipes: message vs byte mode */
+ bool closePend:1; /* file is marked to close */
+ bool invalidHandle:1; /* file closed via session abend */
+ bool messageMode:1; /* for pipes: message vs byte mode */
atomic_t wrtPending; /* handle in use - defer close */
struct semaphore fh_sem; /* prevents reopen race after dead ses*/
char *search_resume_name; /* BB removeme BB */
__u32 cifsAttrs; /* e.g. DOS archive bit, sparse, compressed, system */
atomic_t inUse; /* num concurrent users (local openers cifs) of file*/
unsigned long time; /* jiffies of last update/check of inode */
- unsigned clientCanCacheRead:1; /* read oplock */
- unsigned clientCanCacheAll:1; /* read and writebehind oplock */
- unsigned oplockPending:1;
+ bool clientCanCacheRead:1; /* read oplock */
+ bool clientCanCacheAll:1; /* read and writebehind oplock */
+ bool oplockPending:1;
struct inode vfs_inode;
};
struct smb_hdr *resp_buf; /* response buffer */
int midState; /* wish this were enum but can not pass to wait_event */
__u8 command; /* smb command code */
- unsigned largeBuf:1; /* if valid response, is pointer to large buf */
- unsigned multiRsp:1; /* multiple trans2 responses for one request */
- unsigned multiEnd:1; /* both received */
+ bool largeBuf:1; /* if valid response, is pointer to large buf */
+ bool multiRsp:1; /* multiple trans2 responses for one request */
+ bool multiEnd:1; /* both received */
};
struct oplock_q_entry {
to 0xFFFF00 */
#define CIFS_UNIX_LARGE_WRITE_CAP 0x00000080
#define CIFS_UNIX_TRANSPORT_ENCRYPTION_CAP 0x00000100 /* can do SPNEGO crypt */
-#define CIFS_UNIX_TRANPSORT_ENCRYPTION_MANDATORY_CAP 0x00000200 /* must do */
+#define CIFS_UNIX_TRANSPORT_ENCRYPTION_MANDATORY_CAP 0x00000200 /* must do */
#define CIFS_UNIX_PROXY_CAP 0x00000400 /* Proxy cap: 0xACE ioctl and
QFS PROXY call */
#ifdef CONFIG_CIFS_POSIX
struct smb_hdr *out_buf,
int *bytes_returned);
extern int checkSMB(struct smb_hdr *smb, __u16 mid, unsigned int length);
-extern int is_valid_oplock_break(struct smb_hdr *smb, struct TCP_Server_Info *);
-extern int is_size_safe_to_change(struct cifsInodeInfo *, __u64 eof);
+extern bool is_valid_oplock_break(struct smb_hdr *smb,
+ struct TCP_Server_Info *);
+extern bool is_size_safe_to_change(struct cifsInodeInfo *, __u64 eof);
extern struct cifsFileInfo *find_writable_file(struct cifsInodeInfo *);
#ifdef CONFIG_CIFS_EXPERIMENTAL
extern struct cifsFileInfo *find_readable_file(struct cifsInodeInfo *);
extern unsigned int smbCalcSize_LE(struct smb_hdr *ptr);
extern int decode_negTokenInit(unsigned char *security_blob, int length,
enum securityEnum *secType);
-extern int cifs_inet_pton(int, char *source, void *dst);
+extern int cifs_inet_pton(const int, const char *source, void *dst);
extern int map_smb_to_linux_error(struct smb_hdr *smb, int logErr);
extern void header_assemble(struct smb_hdr *, char /* command */ ,
const struct cifsTconInfo *, int /* length of
#endif /* possibly unneeded function */
extern int CIFSSMBSetEOF(const int xid, struct cifsTconInfo *tcon,
const char *fileName, __u64 size,
- int setAllocationSizeFlag,
+ bool setAllocationSizeFlag,
const struct nls_table *nls_codepage,
int remap_special_chars);
extern int CIFSSMBSetFileSize(const int xid, struct cifsTconInfo *tcon,
__u64 size, __u16 fileHandle, __u32 opener_pid,
- int AllocSizeFlag);
+ bool AllocSizeFlag);
extern int CIFSSMBUnixSetPerms(const int xid, struct cifsTconInfo *pTcon,
char *full_path, __u64 mode, __u64 uid,
__u64 gid, dev_t dev,
const __u16 netfid, const __u64 len,
const __u64 offset, const __u32 numUnlock,
const __u32 numLock, const __u8 lockType,
- const int waitFlag);
+ const bool waitFlag);
extern int CIFSSMBPosixLock(const int xid, struct cifsTconInfo *tcon,
const __u16 smb_file_id, const int get_flag,
const __u64 len, struct file_lock *,
- const __u16 lock_type, const int waitFlag);
+ const __u16 lock_type, const bool waitFlag);
extern int CIFSSMBTDis(const int xid, struct cifsTconInfo *tcon);
extern int CIFSSMBLogoff(const int xid, struct cifsSesInfo *ses);
list_for_each_safe(tmp, tmp1, &pTcon->openFileList) {
open_file = list_entry(tmp, struct cifsFileInfo, tlist);
if (open_file)
- open_file->invalidHandle = TRUE;
+ open_file->invalidHandle = true;
}
write_unlock(&GlobalSMBSeslock);
/* BB Add call to invalidate_inodes(sb) for all superblocks mounted
if (tcon->ses->server->tcpStatus ==
CifsNeedReconnect) {
/* on "soft" mounts we wait once */
- if ((tcon->retry == FALSE) ||
+ if (!tcon->retry ||
(tcon->ses->status == CifsExiting)) {
cFYI(1, ("gave up waiting on "
"reconnect in smb_init"));
if (tcon->ses->server->tcpStatus ==
CifsNeedReconnect) {
/* on "soft" mounts we wait once */
- if ((tcon->retry == FALSE) ||
+ if (!tcon->retry ||
(tcon->ses->status == CifsExiting)) {
cFYI(1, ("gave up waiting on "
"reconnect in smb_init"));
CIFSSMBLock(const int xid, struct cifsTconInfo *tcon,
const __u16 smb_file_id, const __u64 len,
const __u64 offset, const __u32 numUnlock,
- const __u32 numLock, const __u8 lockType, const int waitFlag)
+ const __u32 numLock, const __u8 lockType, const bool waitFlag)
{
int rc = 0;
LOCK_REQ *pSMB = NULL;
int timeout = 0;
__u16 count;
- cFYI(1, ("CIFSSMBLock timeout %d numLock %d", waitFlag, numLock));
+ cFYI(1, ("CIFSSMBLock timeout %d numLock %d", (int)waitFlag, numLock));
rc = small_smb_init(SMB_COM_LOCKING_ANDX, 8, tcon, (void **) &pSMB);
if (rc)
if (lockType == LOCKING_ANDX_OPLOCK_RELEASE) {
timeout = CIFS_ASYNC_OP; /* no response expected */
pSMB->Timeout = 0;
- } else if (waitFlag == TRUE) {
+ } else if (waitFlag) {
timeout = CIFS_BLOCKING_OP; /* blocking operation, no timeout */
pSMB->Timeout = cpu_to_le32(-1);/* blocking - do not time out */
} else {
CIFSSMBPosixLock(const int xid, struct cifsTconInfo *tcon,
const __u16 smb_file_id, const int get_flag, const __u64 len,
struct file_lock *pLockData, const __u16 lock_type,
- const int waitFlag)
+ const bool waitFlag)
{
struct smb_com_transaction2_sfi_req *pSMB = NULL;
struct smb_com_transaction2_sfi_rsp *pSMBr = NULL;
rc = validate_t2((struct smb_t2_rsp *)pSMBr);
if (rc == 0) {
if (pSMBr->hdr.Flags2 & SMBFLG2_UNICODE)
- psrch_inf->unicode = TRUE;
+ psrch_inf->unicode = true;
else
- psrch_inf->unicode = FALSE;
+ psrch_inf->unicode = false;
psrch_inf->ntwrk_buf_start = (char *)pSMBr;
psrch_inf->smallBuf = 0;
le16_to_cpu(pSMBr->t2.ParameterOffset));
if (parms->EndofSearch)
- psrch_inf->endOfSearch = TRUE;
+ psrch_inf->endOfSearch = true;
else
- psrch_inf->endOfSearch = FALSE;
+ psrch_inf->endOfSearch = false;
psrch_inf->entries_in_buffer =
le16_to_cpu(parms->SearchCount);
cFYI(1, ("In FindNext"));
- if (psrch_inf->endOfSearch == TRUE)
+ if (psrch_inf->endOfSearch)
return -ENOENT;
rc = smb_init(SMB_COM_TRANSACTION2, 15, tcon, (void **) &pSMB,
cifs_stats_inc(&tcon->num_fnext);
if (rc) {
if (rc == -EBADF) {
- psrch_inf->endOfSearch = TRUE;
+ psrch_inf->endOfSearch = true;
rc = 0; /* search probably was closed at end of search*/
} else
cFYI(1, ("FindNext returned = %d", rc));
if (rc == 0) {
/* BB fixme add lock for file (srch_info) struct here */
if (pSMBr->hdr.Flags2 & SMBFLG2_UNICODE)
- psrch_inf->unicode = TRUE;
+ psrch_inf->unicode = true;
else
- psrch_inf->unicode = FALSE;
+ psrch_inf->unicode = false;
response_data = (char *) &pSMBr->hdr.Protocol +
le16_to_cpu(pSMBr->t2.ParameterOffset);
parms = (T2_FNEXT_RSP_PARMS *)response_data;
psrch_inf->ntwrk_buf_start = (char *)pSMB;
psrch_inf->smallBuf = 0;
if (parms->EndofSearch)
- psrch_inf->endOfSearch = TRUE;
+ psrch_inf->endOfSearch = true;
else
- psrch_inf->endOfSearch = FALSE;
+ psrch_inf->endOfSearch = false;
psrch_inf->entries_in_buffer =
le16_to_cpu(parms->SearchCount);
psrch_inf->index_of_last_entry +=
int
CIFSSMBSetEOF(const int xid, struct cifsTconInfo *tcon, const char *fileName,
- __u64 size, int SetAllocation,
+ __u64 size, bool SetAllocation,
const struct nls_table *nls_codepage, int remap)
{
struct smb_com_transaction2_spi_req *pSMB = NULL;
int
CIFSSMBSetFileSize(const int xid, struct cifsTconInfo *tcon, __u64 size,
- __u16 fid, __u32 pid_of_opener, int SetAllocation)
+ __u16 fid, __u32 pid_of_opener, bool SetAllocation)
{
struct smb_com_transaction2_sfi_req *pSMB = NULL;
char *data_offset;
#define CIFS_PORT 445
#define RFC1001_PORT 139
-static DECLARE_COMPLETION(cifsd_complete);
-
extern void SMBNTencrypt(unsigned char *passwd, unsigned char *c8,
unsigned char *p24);
mode_t file_mode;
mode_t dir_mode;
unsigned secFlg;
- unsigned rw:1;
- unsigned retry:1;
- unsigned intr:1;
- unsigned setuids:1;
- unsigned override_uid:1;
- unsigned override_gid:1;
- unsigned noperm:1;
- unsigned no_psx_acl:1; /* set if posix acl support should be disabled */
- unsigned cifs_acl:1;
- unsigned no_xattr:1; /* set if xattr (EA) support should be disabled*/
- unsigned server_ino:1; /* use inode numbers from server ie UniqueId */
- unsigned direct_io:1;
- unsigned remap:1; /* set to remap seven reserved chars in filenames */
- unsigned posix_paths:1; /* unset to not ask for posix pathnames. */
- unsigned no_linux_ext:1;
- unsigned sfu_emul:1;
- unsigned nullauth:1; /* attempt to authenticate with null user */
+ bool rw:1;
+ bool retry:1;
+ bool intr:1;
+ bool setuids:1;
+ bool override_uid:1;
+ bool override_gid:1;
+ bool noperm:1;
+ bool no_psx_acl:1; /* set if posix acl support should be disabled */
+ bool cifs_acl:1;
+ bool no_xattr:1; /* set if xattr (EA) support should be disabled*/
+ bool server_ino:1; /* use inode numbers from server ie UniqueId */
+ bool direct_io:1;
+ bool remap:1; /* set to remap seven reserved chars in filenames */
+ bool posix_paths:1; /* unset to not ask for posix pathnames. */
+ bool no_linux_ext:1;
+ bool sfu_emul:1;
+ bool nullauth:1; /* attempt to authenticate with null user */
unsigned nocase; /* request case insensitive filenames */
unsigned nobrl; /* disable sending byte range locks to srv */
unsigned int rsize;
struct task_struct *task_to_wake = NULL;
struct mid_q_entry *mid_entry;
char temp;
- int isLargeBuf = FALSE;
- int isMultiRsp;
+ bool isLargeBuf = false;
+ bool isMultiRsp;
int reconnect;
current->flags |= PF_MEMALLOC;
atomic_inc(&tcpSesAllocCount);
length = tcpSesAllocCount.counter;
write_unlock(&GlobalSMBSeslock);
- complete(&cifsd_complete);
if (length > 1)
mempool_resize(cifs_req_poolp, length + cifs_min_rcv,
GFP_KERNEL);
} else /* if existing small buf clear beginning */
memset(smallbuf, 0, sizeof(struct smb_hdr));
- isLargeBuf = FALSE;
- isMultiRsp = FALSE;
+ isLargeBuf = false;
+ isMultiRsp = false;
smb_buffer = smallbuf;
iov.iov_base = smb_buffer;
iov.iov_len = 4;
reconnect = 0;
if (pdu_length > MAX_CIFS_SMALL_BUFFER_SIZE - 4) {
- isLargeBuf = TRUE;
+ isLargeBuf = true;
memcpy(bigbuf, smallbuf, 4);
smb_buffer = bigbuf;
}
(mid_entry->command == smb_buffer->Command)) {
if (check2ndT2(smb_buffer,server->maxBuf) > 0) {
/* We have a multipart transact2 resp */
- isMultiRsp = TRUE;
+ isMultiRsp = true;
if (mid_entry->resp_buf) {
/* merge response - fix up 1st*/
if (coalesce_t2(smb_buffer,
mid_entry->resp_buf)) {
- mid_entry->multiRsp = 1;
+ mid_entry->multiRsp =
+ true;
break;
} else {
/* all parts received */
- mid_entry->multiEnd = 1;
+ mid_entry->multiEnd =
+ true;
goto multi_t2_fnd;
}
} else {
/* Have first buffer */
mid_entry->resp_buf =
smb_buffer;
- mid_entry->largeBuf = 1;
+ mid_entry->largeBuf =
+ true;
bigbuf = NULL;
}
}
break;
}
mid_entry->resp_buf = smb_buffer;
- if (isLargeBuf)
- mid_entry->largeBuf = 1;
- else
- mid_entry->largeBuf = 0;
+ mid_entry->largeBuf = isLargeBuf;
multi_t2_fnd:
task_to_wake = mid_entry->tsk;
mid_entry->midState = MID_RESPONSE_RECEIVED;
smallbuf = NULL;
}
wake_up_process(task_to_wake);
- } else if ((is_valid_oplock_break(smb_buffer, server) == FALSE)
- && (isMultiRsp == FALSE)) {
+ } else if (!is_valid_oplock_break(smb_buffer, server) &&
+ !isMultiRsp) {
cERROR(1, ("No task to wake, unknown frame received! "
"NumMids %d", midCount.counter));
cifs_dump_mem("Received Data is: ", (char *)smb_buffer,
vol->file_mode = (S_IRWXUGO | S_ISGID) & (~S_IXGRP);
/* vol->retry default is 0 (i.e. "soft" limited retry not hard retry) */
- vol->rw = TRUE;
+ vol->rw = true;
/* default is always to request posix paths. */
vol->posix_paths = 1;
} else if (strnicmp(data, "guest", 5) == 0) {
/* ignore */
} else if (strnicmp(data, "rw", 2) == 0) {
- vol->rw = TRUE;
+ vol->rw = true;
} else if ((strnicmp(data, "suid", 4) == 0) ||
(strnicmp(data, "nosuid", 6) == 0) ||
(strnicmp(data, "exec", 4) == 0) ||
is ok to just ignore them */
continue;
} else if (strnicmp(data, "ro", 2) == 0) {
- vol->rw = FALSE;
+ vol->rw = false;
} else if (strnicmp(data, "hard", 4) == 0) {
vol->retry = 1;
} else if (strnicmp(data, "soft", 4) == 0) {
"begin with // or \\\\ \n");
return 1;
}
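+ /* Force the separator after the server name to a backslash. */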
+ value = strpbrk(vol->UNC+2, "/\\");
+ if (value)
+ *value = '\\';
} else {
printk(KERN_WARNING "CIFS: UNC name too long\n");
return 1;
{
struct list_head *tmp;
struct cifsTconInfo *tcon;
+ __be32 old_ip;
read_lock(&GlobalSMBSeslock);
+
list_for_each(tmp, &GlobalTreeConnectionList) {
cFYI(1, ("Next tcon"));
tcon = list_entry(tmp, struct cifsTconInfo, cifsConnectionList);
- if (tcon->ses) {
- if (tcon->ses->server) {
- cFYI(1,
- ("old ip addr: %x == new ip %x ?",
- tcon->ses->server->addr.sockAddr.sin_addr.
- s_addr, new_target_ip_addr));
- if (tcon->ses->server->addr.sockAddr.sin_addr.
- s_addr == new_target_ip_addr) {
- /* BB lock tcon, server and tcp session and increment use count here? */
- /* found a match on the TCP session */
- /* BB check if reconnection needed */
- cFYI(1,
- ("IP match, old UNC: %s new: %s",
- tcon->treeName, uncName));
- if (strncmp
- (tcon->treeName, uncName,
- MAX_TREE_SIZE) == 0) {
- cFYI(1,
- ("and old usr: %s new: %s",
- tcon->treeName, uncName));
- if (strncmp
- (tcon->ses->userName,
- userName,
- MAX_USERNAME_SIZE) == 0) {
- read_unlock(&GlobalSMBSeslock);
- /* matched smb session
- (user name */
- return tcon;
- }
- }
- }
- }
- }
+ if (!tcon->ses || !tcon->ses->server)
+ continue;
+
+ old_ip = tcon->ses->server->addr.sockAddr.sin_addr.s_addr;
+ cFYI(1, ("old ip addr: %x == new ip %x ?",
+ old_ip, new_target_ip_addr));
+
+ if (old_ip != new_target_ip_addr)
+ continue;
+
+ /* BB lock tcon, server, tcp session and increment use count? */
+ /* found a match on the TCP session */
+ /* BB check if reconnection needed */
+ cFYI(1, ("IP match, old UNC: %s new: %s",
+ tcon->treeName, uncName));
+
+ if (strncmp(tcon->treeName, uncName, MAX_TREE_SIZE))
+ continue;
+
+ cFYI(1, ("and old usr: %s new: %s",
+ tcon->treeName, uncName));
+
+ if (strncmp(tcon->ses->userName, userName, MAX_USERNAME_SIZE))
+ continue;
+
+ /* matched smb session (user name) */
+ read_unlock(&GlobalSMBSeslock);
+ return tcon;
}
+
read_unlock(&GlobalSMBSeslock);
return NULL;
}
kfree(srvTcp->hostname);
goto out;
}
- wait_for_completion(&cifsd_complete);
rc = 0;
memcpy(srvTcp->workstation_RFC1001_name,
volume_info.source_rfc1001_name, 16);
static int
CIFSNTLMSSPNegotiateSessSetup(unsigned int xid,
- struct cifsSesInfo *ses, int *pNTLMv2_flag,
+ struct cifsSesInfo *ses, bool *pNTLMv2_flag,
const struct nls_table *nls_codepage)
{
struct smb_hdr *smb_buffer;
if (ses == NULL)
return -EINVAL;
domain = ses->domainName;
- *pNTLMv2_flag = FALSE;
+ *pNTLMv2_flag = false;
smb_buffer = cifs_buf_get();
if (smb_buffer == NULL) {
return -ENOMEM;
CIFS_CRYPTO_KEY_SIZE);
if (SecurityBlob2->NegotiateFlags &
cpu_to_le32(NTLMSSP_NEGOTIATE_NTLMV2))
- *pNTLMv2_flag = TRUE;
+ *pNTLMv2_flag = true;
if ((SecurityBlob2->NegotiateFlags &
cpu_to_le32(NTLMSSP_NEGOTIATE_ALWAYS_SIGN))
}
static int
CIFSNTLMSSPAuthSessSetup(unsigned int xid, struct cifsSesInfo *ses,
- char *ntlm_session_key, int ntlmv2_flag,
+ char *ntlm_session_key, bool ntlmv2_flag,
const struct nls_table *nls_codepage)
{
struct smb_hdr *smb_buffer;
cifs_sb->prepathlen = 0;
cifs_sb->prepath = NULL;
kfree(tmp);
- if (ses)
- schedule_timeout_interruptible(msecs_to_jiffies(500));
if (ses)
sesInfoFree(ses);
{
int rc = 0;
char ntlm_session_key[CIFS_SESS_KEY_SIZE];
- int ntlmv2_flag = FALSE;
+ bool ntlmv2_flag = false;
int first_time = 0;
/* what if server changes its buffer size after dropping the session? */
struct cifsFileInfo *pCifsFile = NULL;
struct cifsInodeInfo *pCifsInode;
int disposition = FILE_OVERWRITE_IF;
- int write_only = FALSE;
+ bool write_only = false;
xid = GetXid();
if (oflags & FMODE_WRITE) {
desiredAccess |= GENERIC_WRITE;
if (!(oflags & FMODE_READ))
- write_only = TRUE;
+ write_only = true;
}
if ((oflags & (O_CREAT | O_EXCL)) == (O_CREAT | O_EXCL))
d_instantiate(direntry, newinode);
}
if ((nd == NULL /* nfsd case - nfs srv does not set nd */) ||
- ((nd->flags & LOOKUP_OPEN) == FALSE)) {
+ (!(nd->flags & LOOKUP_OPEN))) {
/* mknod case - do not leave file open */
CIFSSMBClose(xid, pTcon, fileHandle);
} else if (newinode) {
pCifsFile->netfid = fileHandle;
pCifsFile->pid = current->tgid;
pCifsFile->pInode = newinode;
- pCifsFile->invalidHandle = FALSE;
- pCifsFile->closePend = FALSE;
+ pCifsFile->invalidHandle = false;
+ pCifsFile->closePend = false;
init_MUTEX(&pCifsFile->fh_sem);
mutex_init(&pCifsFile->lock_mutex);
INIT_LIST_HEAD(&pCifsFile->llist);
pCifsInode = CIFS_I(newinode);
if (pCifsInode) {
/* if readable file instance put first in list*/
- if (write_only == TRUE) {
+ if (write_only) {
list_add_tail(&pCifsFile->flist,
&pCifsInode->openFileList);
} else {
&pCifsInode->openFileList);
}
if ((oplock & 0xF) == OPLOCK_EXCLUSIVE) {
- pCifsInode->clientCanCacheAll = TRUE;
- pCifsInode->clientCanCacheRead = TRUE;
+ pCifsInode->clientCanCacheAll = true;
+ pCifsInode->clientCanCacheRead = true;
cFYI(1, ("Exclusive Oplock inode %p",
newinode));
} else if ((oplock & 0xF) == OPLOCK_READ)
- pCifsInode->clientCanCacheRead = TRUE;
+ pCifsInode->clientCanCacheRead = true;
}
write_unlock(&GlobalSMBSeslock);
}
.match = user_match,
};
+/* Checks if the supplied name is an IP address
+ * returns:
+ * 1 - name is an IP address
+ * 0 - name is not an IP address
+ */
+static int is_ip(const char *name)
+{
+ int rc;
+ struct sockaddr_in sin_server;
+ struct sockaddr_in6 sin_server6;
+
+ rc = cifs_inet_pton(AF_INET, name,
+ &sin_server.sin_addr.s_addr);
+
+ if (rc <= 0) {
+ /* not ipv4 address, try ipv6 */
+ rc = cifs_inet_pton(AF_INET6, name,
+ &sin_server6.sin6_addr.in6_u);
+ if (rc > 0)
+ return 1;
+ } else {
+ return 1;
+ }
+ /* we failed translating address */
+ return 0;
+}
/* Resolves server name to ip address.
* input:
dns_resolve_server_name_to_ip(const char *unc, char **ip_addr)
{
int rc = -EAGAIN;
- struct key *rkey;
+ struct key *rkey = ERR_PTR(-EAGAIN);
char *name;
+ char *data = NULL;
int len;
if (!ip_addr || !unc)
memcpy(name, unc+2, len);
name[len] = 0;
+ if (is_ip(name)) {
+ cFYI(1, ("%s: it is IP, skipping dns upcall: %s",
+ __func__, name));
+ data = name;
+ goto skip_upcall;
+ }
+
rkey = request_key(&key_type_dns_resolver, name, "");
if (!IS_ERR(rkey)) {
- len = strlen(rkey->payload.data);
- *ip_addr = kmalloc(len+1, GFP_KERNEL);
- if (*ip_addr) {
- memcpy(*ip_addr, rkey->payload.data, len);
- (*ip_addr)[len] = '\0';
- cFYI(1, ("%s: resolved: %s to %s", __func__,
+ data = rkey->payload.data;
+ cFYI(1, ("%s: resolved: %s to %s", __func__,
rkey->description,
*ip_addr
));
+ } else {
+ cERROR(1, ("%s: unable to resolve: %s", __func__, name));
+ goto out;
+ }
+
+skip_upcall:
+ if (data) {
+ len = strlen(data);
+ *ip_addr = kmalloc(len+1, GFP_KERNEL);
+ if (*ip_addr) {
+ memcpy(*ip_addr, data, len);
+ (*ip_addr)[len] = '\0';
rc = 0;
} else {
rc = -ENOMEM;
}
- key_put(rkey);
- } else {
- cERROR(1, ("%s: unable to resolve: %s", __func__, name));
+ if (!IS_ERR(rkey))
+ key_put(rkey);
}
+out:
kfree(name);
return rc;
}
{
int xid;
int rc = -EINVAL;
- int oplock = FALSE;
+ int oplock = 0;
struct cifs_sb_info *cifs_sb;
struct cifsTconInfo *pTcon;
char *full_path = NULL;
INIT_LIST_HEAD(&private_data->llist);
private_data->pfile = file; /* needed for writepage */
private_data->pInode = inode;
- private_data->invalidHandle = FALSE;
- private_data->closePend = FALSE;
+ private_data->invalidHandle = false;
+ private_data->closePend = false;
/* we have to track num writers to the inode, since writepages
does not tell us which handle the write is for so there can
be a close (overlapping with write) of the filehandle that
full_path, buf, inode->i_sb, xid, NULL);
if ((*oplock & 0xF) == OPLOCK_EXCLUSIVE) {
- pCifsInode->clientCanCacheAll = TRUE;
- pCifsInode->clientCanCacheRead = TRUE;
+ pCifsInode->clientCanCacheAll = true;
+ pCifsInode->clientCanCacheRead = true;
cFYI(1, ("Exclusive Oplock granted on inode %p",
file->f_path.dentry->d_inode));
} else if ((*oplock & 0xF) == OPLOCK_READ)
- pCifsInode->clientCanCacheRead = TRUE;
+ pCifsInode->clientCanCacheRead = true;
return rc;
}
if (oplockEnabled)
oplock = REQ_OPLOCK;
else
- oplock = FALSE;
+ oplock = 0;
/* BB pass O_SYNC flag through on file attributes .. BB */
return rc;
}
-static int cifs_reopen_file(struct file *file, int can_flush)
+static int cifs_reopen_file(struct file *file, bool can_flush)
{
int rc = -EACCES;
int xid, oplock;
xid = GetXid();
down(&pCifsFile->fh_sem);
- if (pCifsFile->invalidHandle == FALSE) {
+ if (!pCifsFile->invalidHandle) {
up(&pCifsFile->fh_sem);
FreeXid(xid);
return 0;
if (oplockEnabled)
oplock = REQ_OPLOCK;
else
- oplock = FALSE;
+ oplock = 0;
/* Can not refresh inode by passing in file_info buf to be returned
by SMBOpen and then calling get_inode_info with returned buf
cFYI(1, ("oplock: %d", oplock));
} else {
pCifsFile->netfid = netfid;
- pCifsFile->invalidHandle = FALSE;
+ pCifsFile->invalidHandle = false;
up(&pCifsFile->fh_sem);
pCifsInode = CIFS_I(inode);
if (pCifsInode) {
CIFS_I(inode)->write_behind_rc = rc;
/* temporarily disable caching while we
go to server to get inode info */
- pCifsInode->clientCanCacheAll = FALSE;
- pCifsInode->clientCanCacheRead = FALSE;
+ pCifsInode->clientCanCacheAll = false;
+ pCifsInode->clientCanCacheRead = false;
if (pTcon->unix_ext)
rc = cifs_get_inode_info_unix(&inode,
full_path, inode->i_sb, xid);
we can not go to the server to get the new inode
info */
if ((oplock & 0xF) == OPLOCK_EXCLUSIVE) {
- pCifsInode->clientCanCacheAll = TRUE;
- pCifsInode->clientCanCacheRead = TRUE;
+ pCifsInode->clientCanCacheAll = true;
+ pCifsInode->clientCanCacheRead = true;
cFYI(1, ("Exclusive Oplock granted on inode %p",
file->f_path.dentry->d_inode));
} else if ((oplock & 0xF) == OPLOCK_READ) {
- pCifsInode->clientCanCacheRead = TRUE;
- pCifsInode->clientCanCacheAll = FALSE;
+ pCifsInode->clientCanCacheRead = true;
+ pCifsInode->clientCanCacheAll = false;
} else {
- pCifsInode->clientCanCacheRead = FALSE;
- pCifsInode->clientCanCacheAll = FALSE;
+ pCifsInode->clientCanCacheRead = false;
+ pCifsInode->clientCanCacheAll = false;
}
cifs_relock_file(pCifsFile);
}
if (pSMBFile) {
struct cifsLockInfo *li, *tmp;
- pSMBFile->closePend = TRUE;
+ pSMBFile->closePend = true;
if (pTcon) {
/* no sense reconnecting to close a file that is
already closed */
cFYI(1, ("closing last open instance for inode %p", inode));
/* if the file is not open we do not know if we can cache info
on this inode, much less write behind and read ahead */
- CIFS_I(inode)->clientCanCacheRead = FALSE;
- CIFS_I(inode)->clientCanCacheAll = FALSE;
+ CIFS_I(inode)->clientCanCacheRead = false;
+ CIFS_I(inode)->clientCanCacheAll = false;
}
read_unlock(&GlobalSMBSeslock);
if ((rc == 0) && CIFS_I(inode)->write_behind_rc)
pTcon = cifs_sb->tcon;
cFYI(1, ("Freeing private data in close dir"));
- if ((pCFileStruct->srch_inf.endOfSearch == FALSE) &&
- (pCFileStruct->invalidHandle == FALSE)) {
- pCFileStruct->invalidHandle = TRUE;
+ if (!pCFileStruct->srch_inf.endOfSearch &&
+ !pCFileStruct->invalidHandle) {
+ pCFileStruct->invalidHandle = true;
rc = CIFSFindClose(xid, pTcon, pCFileStruct->netfid);
cFYI(1, ("Closing uncompleted readdir with rc %d",
rc));
__u32 numLock = 0;
__u32 numUnlock = 0;
__u64 length;
- int wait_flag = FALSE;
+ bool wait_flag = false;
struct cifs_sb_info *cifs_sb;
struct cifsTconInfo *pTcon;
__u16 netfid;
__u8 lockType = LOCKING_ANDX_LARGE_FILES;
- int posix_locking;
+ bool posix_locking;
length = 1 + pfLock->fl_end - pfLock->fl_start;
rc = -EACCES;
cFYI(1, ("Flock"));
if (pfLock->fl_flags & FL_SLEEP) {
cFYI(1, ("Blocking lock"));
- wait_flag = TRUE;
+ wait_flag = true;
}
if (pfLock->fl_flags & FL_ACCESS)
cFYI(1, ("Process suspended by mandatory locking - "
stored_rc = CIFSSMBLock(xid, pTcon,
netfid,
li->length, li->offset,
- 1, 0, li->type, FALSE);
+ 1, 0, li->type, false);
if (stored_rc)
rc = stored_rc;
filemap_fdatawait from here so tell
reopen_file not to flush data to server
now */
- rc = cifs_reopen_file(file, FALSE);
+ rc = cifs_reopen_file(file, false);
if (rc != 0)
break;
}
filemap_fdatawait from here so tell
reopen_file not to flush data to
server now */
- rc = cifs_reopen_file(file, FALSE);
+ rc = cifs_reopen_file(file, false);
if (rc != 0)
break;
}
read_unlock(&GlobalSMBSeslock);
/* Had to unlock since following call can block */
- rc = cifs_reopen_file(open_file->pfile, FALSE);
+ rc = cifs_reopen_file(open_file->pfile, false);
if (!rc) {
if (!open_file->closePend)
return open_file;
int buf_type = CIFS_NO_BUFFER;
if ((open_file->invalidHandle) &&
(!open_file->closePend)) {
- rc = cifs_reopen_file(file, TRUE);
+ rc = cifs_reopen_file(file, true);
if (rc != 0)
break;
}
while (rc == -EAGAIN) {
if ((open_file->invalidHandle) &&
(!open_file->closePend)) {
- rc = cifs_reopen_file(file, TRUE);
+ rc = cifs_reopen_file(file, true);
if (rc != 0)
break;
}
while (rc == -EAGAIN) {
if ((open_file->invalidHandle) &&
(!open_file->closePend)) {
- rc = cifs_reopen_file(file, TRUE);
+ rc = cifs_reopen_file(file, true);
if (rc != 0)
break;
}
refreshing the inode only on increases in the file size
but this is tricky to do without racing with writebehind
page caching in the current Linux kernel design */
-int is_size_safe_to_change(struct cifsInodeInfo *cifsInode, __u64 end_of_file)
+bool is_size_safe_to_change(struct cifsInodeInfo *cifsInode, __u64 end_of_file)
{
if (!cifsInode)
- return 1;
+ return true;
if (is_inode_writable(cifsInode)) {
/* This inode is open for write at least once */
if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_DIRECT_IO) {
/* since no page cache to corrupt on directio
we can change size safely */
- return 1;
+ return true;
}
if (i_size_read(&cifsInode->vfs_inode) < end_of_file)
- return 1;
+ return true;
- return 0;
+ return false;
} else
- return 1;
+ return true;
}
static int cifs_prepare_write(struct file *file, struct page *page,
struct cifs_sb_info *cifs_sb, int xid)
{
int rc;
- int oplock = FALSE;
+ int oplock = 0;
__u16 netfid;
struct cifsTconInfo *pTcon = cifs_sb->tcon;
char buf[24];
struct cifs_sb_info *cifs_sb = CIFS_SB(sb);
const unsigned char *full_path = NULL;
char *buf = NULL;
- int adjustTZ = FALSE;
+ bool adjustTZ = false;
bool is_dfs_referral = false;
pTcon = cifs_sb->tcon;
pfindData, cifs_sb->local_nls,
cifs_sb->mnt_cifs_flags &
CIFS_MOUNT_MAP_SPECIAL_CHR);
- adjustTZ = TRUE;
+ adjustTZ = true;
}
}
/* dump_mem("\nQPathInfo return data",&findData, sizeof(findData)); */
} else if (rc == -ENOENT) {
d_drop(direntry);
} else if (rc == -ETXTBSY) {
- int oplock = FALSE;
+ int oplock = 0;
__u16 netfid;
rc = CIFSSMBOpen(xid, pTcon, full_path, FILE_OPEN, DELETE,
rc = -EOPNOTSUPP;
if (rc == -EOPNOTSUPP) {
- int oplock = FALSE;
+ int oplock = 0;
__u16 netfid;
/* rc = CIFSSMBSetAttrLegacy(xid, pTcon,
full_path,
if (direntry->d_inode)
drop_nlink(direntry->d_inode);
} else if (rc == -ETXTBSY) {
- int oplock = FALSE;
+ int oplock = 0;
__u16 netfid;
rc = CIFSSMBOpen(xid, pTcon, full_path,
cFYI(1, ("rename rc %d", rc));
if ((rc == -EIO) || (rc == -EEXIST)) {
- int oplock = FALSE;
+ int oplock = 0;
__u16 netfid;
/* BB FIXME Is Generic Read correct for rename? */
struct cifsInodeInfo *cifsInode;
loff_t local_size;
struct timespec local_mtime;
- int invalidate_inode = FALSE;
+ bool invalidate_inode = false;
if (direntry->d_inode == NULL)
return -ENOENT;
only ones who could have modified the file and the
server copy is staler than ours */
} else {
- invalidate_inode = TRUE;
+ invalidate_inode = true;
}
}
int rc = -EACCES;
struct cifsFileInfo *open_file = NULL;
FILE_BASIC_INFO time_buf;
- int set_time = FALSE;
- int set_dosattr = FALSE;
+ bool set_time = false;
+ bool set_dosattr = false;
__u64 mode = 0xFFFFFFFFFFFFFFFFULL;
__u64 uid = 0xFFFFFFFFFFFFFFFFULL;
__u64 gid = 0xFFFFFFFFFFFFFFFFULL;
__u16 nfid = open_file->netfid;
__u32 npid = open_file->pid;
rc = CIFSSMBSetFileSize(xid, pTcon, attrs->ia_size,
- nfid, npid, FALSE);
+ nfid, npid, false);
atomic_dec(&open_file->wrtPending);
cFYI(1, ("SetFSize for attrs rc = %d", rc));
if ((rc == -EINVAL) || (rc == -EOPNOTSUPP)) {
it was found or because there was an error setting
it by handle */
rc = CIFSSMBSetEOF(xid, pTcon, full_path,
- attrs->ia_size, FALSE,
+ attrs->ia_size, false,
cifs_sb->local_nls,
cifs_sb->mnt_cifs_flags &
CIFS_MOUNT_MAP_SPECIAL_CHR);
cFYI(1, ("SetEOF by path (setattrs) rc = %d", rc));
if ((rc == -EINVAL) || (rc == -EOPNOTSUPP)) {
__u16 netfid;
- int oplock = FALSE;
+ int oplock = 0;
rc = SMBLegacyOpen(xid, pTcon, full_path,
FILE_OPEN,
/* Server is ok setting allocation size implicitly - no need
to call:
- CIFSSMBSetEOF(xid, pTcon, full_path, attrs->ia_size, TRUE,
+ CIFSSMBSetEOF(xid, pTcon, full_path, attrs->ia_size, true,
cifs_sb->local_nls);
*/
#endif
/* not writeable */
if ((cifsInode->cifsAttrs & ATTR_READONLY) == 0) {
- set_dosattr = TRUE;
+ set_dosattr = true;
time_buf.Attributes =
cpu_to_le32(cifsInode->cifsAttrs |
ATTR_READONLY);
not be able to write to it - so if any write
bit is enabled for user or group or other we
need to at least try to remove r/o dos attr */
- set_dosattr = TRUE;
+ set_dosattr = true;
time_buf.Attributes = cpu_to_le32(cifsInode->cifsAttrs &
(~ATTR_READONLY));
/* Windows ignores set to zero */
if (time_buf.Attributes == 0)
time_buf.Attributes |= cpu_to_le32(ATTR_NORMAL);
}
-#ifdef CONFIG_CIFS_EXPERIMENTAL
- if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_CIFS_ACL)
- mode_to_acl(direntry->d_inode, full_path, mode);
-#endif
}
if (attrs->ia_valid & ATTR_ATIME) {
- set_time = TRUE;
+ set_time = true;
time_buf.LastAccessTime =
cpu_to_le64(cifs_UnixTimeToNT(attrs->ia_atime));
} else
time_buf.LastAccessTime = 0;
if (attrs->ia_valid & ATTR_MTIME) {
- set_time = TRUE;
+ set_time = true;
time_buf.LastWriteTime =
cpu_to_le64(cifs_UnixTimeToNT(attrs->ia_mtime));
} else
server times */
if (set_time && (attrs->ia_valid & ATTR_CTIME)) {
- set_time = TRUE;
+ set_time = true;
/* Although Samba throws this field away
it may be useful to Windows - but we do
not want to set ctime unless some other
rc = -EOPNOTSUPP;
if (rc == -EOPNOTSUPP) {
- int oplock = FALSE;
+ int oplock = 0;
__u16 netfid;
cFYI(1, ("calling SetFileInfo since SetPathInfo for "
struct inode *inode = direntry->d_inode;
int rc = -EACCES;
int xid;
- int oplock = FALSE;
+ int oplock = 0;
struct cifs_sb_info *cifs_sb;
struct cifsTconInfo *pTcon;
char *full_path = NULL;
}
return 0;
}
-int
+
+bool
is_valid_oplock_break(struct smb_hdr *buf, struct TCP_Server_Info *srv)
{
struct smb_com_lock_req *pSMB = (struct smb_com_lock_req *)buf;
pnotify->Action)); /* BB removeme BB */
/* cifs_dump_mem("Rcvd notify Data: ",buf,
sizeof(struct smb_hdr)+60); */
- return TRUE;
+ return true;
}
if (pSMBr->hdr.Status.CifsError) {
cFYI(1, ("notify err 0x%d",
pSMBr->hdr.Status.CifsError));
- return TRUE;
+ return true;
}
- return FALSE;
+ return false;
}
if (pSMB->hdr.Command != SMB_COM_LOCKING_ANDX)
- return FALSE;
+ return false;
if (pSMB->hdr.Flags & SMBFLG_RESPONSE) {
/* no sense logging error on invalid handle on oplock
break - harmless race between close request and oplock
if ((NT_STATUS_INVALID_HANDLE) ==
le32_to_cpu(pSMB->hdr.Status.CifsError)) {
cFYI(1, ("invalid handle on oplock break"));
- return TRUE;
+ return true;
} else if (ERRbadfid ==
le16_to_cpu(pSMB->hdr.Status.DosError.Error)) {
- return TRUE;
+ return true;
} else {
- return FALSE; /* on valid oplock brk we get "request" */
+ return false; /* on valid oplock brk we get "request" */
}
}
if (pSMB->hdr.WordCount != 8)
- return FALSE;
+ return false;
cFYI(1, ("oplock type 0x%d level 0x%d",
pSMB->LockType, pSMB->OplockLevel));
if (!(pSMB->LockType & LOCKING_ANDX_OPLOCK_RELEASE))
- return FALSE;
+ return false;
/* look up tcon based on tid & uid */
read_lock(&GlobalSMBSeslock);
("file id match, oplock break"));
pCifsInode =
CIFS_I(netfile->pInode);
- pCifsInode->clientCanCacheAll = FALSE;
+ pCifsInode->clientCanCacheAll = false;
if (pSMB->OplockLevel == 0)
pCifsInode->clientCanCacheRead
- = FALSE;
- pCifsInode->oplockPending = TRUE;
+ = false;
+ pCifsInode->oplockPending = true;
AllocOplockQEntry(netfile->pInode,
netfile->netfid,
tcon);
("about to wake up oplock thread"));
if (oplockThread)
wake_up_process(oplockThread);
- return TRUE;
+ return true;
}
}
read_unlock(&GlobalSMBSeslock);
cFYI(1, ("No matching file for oplock break"));
- return TRUE;
+ return true;
}
}
read_unlock(&GlobalSMBSeslock);
cFYI(1, ("Can not process oplock break for non-existent connection"));
- return TRUE;
+ return true;
}
void
{0, 0}
};
-
-/* if the mount helper is missing we need to reverse the 1st slash
- from '/' to backslash in order to format the UNC properly for
- ip address parsing and for tree connect (unless the user
- remembered to put the UNC name in properly). Fortunately we do
- not have to call this twice (we check for IPv4 addresses
- first, so it is already converted by the time we
- try IPv6 addresses */
-static int canonicalize_unc(char *cp)
-{
- int i;
-
- for (i = 0; i <= 46 /* INET6_ADDRSTRLEN */ ; i++) {
- if (cp[i] == 0)
- break;
- if (cp[i] == '\\')
- break;
- if (cp[i] == '/') {
- cFYI(DBG2, ("change slash to \\ in malformed UNC"));
- cp[i] = '\\';
- return 1;
- }
- }
- return 0;
-}
-
/* Convert string containing dotted ip address to binary form */
/* returns 0 if invalid address */
int
-cifs_inet_pton(int address_family, char *cp, void *dst)
+cifs_inet_pton(const int address_family, const char *cp, void *dst)
{
int ret = 0;
/* calculate length by finding first slash or NULL */
if (address_family == AF_INET) {
ret = in4_pton(cp, -1 /* len */, dst, '\\', NULL);
- if (ret == 0) {
- if (canonicalize_unc(cp))
- ret = in4_pton(cp, -1, dst, '\\', NULL);
- }
} else if (address_family == AF_INET6) {
ret = in6_pton(cp, -1 /* len */, dst , '\\', NULL);
}
if (file->private_data == NULL)
return -ENOMEM;
cifsFile = file->private_data;
- cifsFile->invalidHandle = TRUE;
- cifsFile->srch_inf.endOfSearch = FALSE;
+ cifsFile->invalidHandle = true;
+ cifsFile->srch_inf.endOfSearch = false;
cifs_sb = CIFS_SB(file->f_path.dentry->d_sb);
if (cifs_sb == NULL)
cifs_sb->mnt_cifs_flags &
CIFS_MOUNT_MAP_SPECIAL_CHR, CIFS_DIR_SEP(cifs_sb));
if (rc == 0)
- cifsFile->invalidHandle = FALSE;
+ cifsFile->invalidHandle = false;
if ((rc == -EOPNOTSUPP) &&
(cifs_sb->mnt_cifs_flags & CIFS_MOUNT_SERVER_INUM)) {
cifs_sb->mnt_cifs_flags &= ~CIFS_MOUNT_SERVER_INUM;
(index_to_find < first_entry_in_buffer)) {
/* close and restart search */
cFYI(1, ("search backing up - close and restart search"));
- cifsFile->invalidHandle = TRUE;
+ cifsFile->invalidHandle = true;
CIFSFindClose(xid, pTcon, cifsFile->netfid);
kfree(cifsFile->search_resume_name);
cifsFile->search_resume_name = NULL;
}
while ((index_to_find >= cifsFile->srch_inf.index_of_last_entry) &&
- (rc == 0) && (cifsFile->srch_inf.endOfSearch == FALSE)) {
+ (rc == 0) && !cifsFile->srch_inf.endOfSearch) {
cFYI(1, ("calling findnext2"));
rc = CIFSFindNext(xid, pTcon, cifsFile->netfid,
&cifsFile->srch_inf);
break;
}
} /* else {
- cifsFile->invalidHandle = TRUE;
+ cifsFile->invalidHandle = true;
CIFSFindClose(xid, pTcon, cifsFile->netfid);
}
kfree(cifsFile->search_resume_name);
#include "cifs_debug.h"
#include "cifsencrypt.h"
-#ifndef FALSE
-#define FALSE 0
+#ifndef false
+#define false 0
#endif
-#ifndef TRUE
-#define TRUE 1
+#ifndef true
+#define true 1
#endif
/* following came from the other byteorder.h to avoid include conflicts */
#ifdef CONFIG_CIFS_EXPERIMENTAL
else if (cifs_sb->mnt_cifs_flags & CIFS_MOUNT_CIFS_ACL) {
__u16 fid;
- int oplock = FALSE;
+ int oplock = 0;
struct cifs_ntsd *pacl = NULL;
__u32 buflen = 0;
if (experimEnabled)
* give it the opportunity to lock the file.
*/
if (found)
- cond_resched();
+ cond_resched_bkl();
find_conflict:
for_each_lock(inode, before) {
#include <linux/highmem.h>
#include <linux/pagemap.h>
#include <linux/audit.h>
+#include <linux/syscalls.h>
#include <asm/uaccess.h>
#include <asm/ioctls.h>
error = do_pipe(fd);
if (!error) {
- if (copy_to_user(fildes, fd, sizeof(fd)))
+ if (copy_to_user(fildes, fd, sizeof(fd))) {
+ sys_close(fd[0]);
+ sys_close(fd[1]);
error = -EFAULT;
+ }
}
return error;
}
#include <linux/highmem.h>
#include <linux/ptrace.h>
#include <linux/pagemap.h>
-#include <linux/ptrace.h>
#include <linux/mempolicy.h>
#include <linux/swap.h>
#include <linux/swapops.h>
-#include <linux/seq_file.h>
#include <asm/elf.h>
#include <asm/uaccess.h>
{
struct address_space *mapping = out->f_mapping;
struct inode *inode = mapping->host;
- int killsuid, killpriv;
+ struct splice_desc sd = {
+ .total_len = len,
+ .flags = flags,
+ .pos = *ppos,
+ .u.file = out,
+ };
ssize_t ret;
- int err = 0;
-
- killpriv = security_inode_need_killpriv(out->f_path.dentry);
- killsuid = should_remove_suid(out->f_path.dentry);
- if (unlikely(killsuid || killpriv)) {
- mutex_lock(&inode->i_mutex);
- if (killpriv)
- err = security_inode_killpriv(out->f_path.dentry);
- if (!err && killsuid)
- err = __remove_suid(out->f_path.dentry, killsuid);
- mutex_unlock(&inode->i_mutex);
- if (err)
- return err;
- }
- ret = splice_from_pipe(pipe, out, ppos, len, flags, pipe_to_file);
+ inode_double_lock(inode, pipe->inode);
+ ret = remove_suid(out->f_path.dentry);
+ if (likely(!ret))
+ ret = __splice_from_pipe(pipe, &sd, pipe_to_file);
+ inode_double_unlock(inode, pipe->inode);
if (ret > 0) {
unsigned long nr_pages;
* sync it.
*/
if (unlikely((out->f_flags & O_SYNC) || IS_SYNC(inode))) {
+ int err;
+
mutex_lock(&inode->i_mutex);
err = generic_osync_inode(inode, mapping,
OSYNC_METADATA|OSYNC_DATA);
ret = splice_direct_to_actor(in, &sd, direct_splice_actor);
if (ret > 0)
- *ppos = sd.pos;
+ *ppos += ret;
return ret;
}
#include <linux/buffer_head.h>
#include <linux/sched.h>
#include <linux/crc-itu-t.h>
+#include <linux/exportfs.h>
static inline int udf_match(int len1, const char *name1, int len2,
const char *name2)
sector_t offset;
struct extent_position epos = {};
struct udf_inode_info *dinfo = UDF_I(dir);
+ int isdotdot = dentry->d_name.len == 2 &&
+ dentry->d_name.name[0] == '.' && dentry->d_name.name[1] == '.';
size = udf_ext0_offset(dir) + dir->i_size;
f_pos = udf_ext0_offset(dir);
continue;
}
+ if ((cfi->fileCharacteristics & FID_FILE_CHAR_PARENT) &&
+ isdotdot) {
+ brelse(epos.bh);
+ return fi;
+ }
+
if (!lfi)
continue;
}
}
unlock_kernel();
- d_add(dentry, inode);
- return NULL;
+ return d_splice_alias(inode, dentry);
}
static struct fileIdentDesc *udf_add_entry(struct inode *dir,
uint16_t liu;
int block;
kernel_lb_addr eloc;
- uint32_t elen;
+ uint32_t elen = 0;
sector_t offset;
struct extent_position epos = {};
struct udf_inode_info *dinfo;
}
add:
- if (dinfo->i_alloc_type != ICBTAG_FLAG_AD_IN_ICB) {
+ /* Is there any extent whose size we need to round up? */
+ if (dinfo->i_alloc_type != ICBTAG_FLAG_AD_IN_ICB && elen) {
elen = (elen + sb->s_blocksize - 1) & ~(sb->s_blocksize - 1);
if (dinfo->i_alloc_type == ICBTAG_FLAG_AD_SHORT)
epos.offset -= sizeof(short_ad);
return retval;
}
+static struct dentry *udf_get_parent(struct dentry *child)
+{
+ struct dentry *parent;
+ struct inode *inode = NULL;
+ struct dentry dotdot;
+ struct fileIdentDesc cfi;
+ struct udf_fileident_bh fibh;
+
+ dotdot.d_name.name = "..";
+ dotdot.d_name.len = 2;
+
+ lock_kernel();
+ if (!udf_find_entry(child->d_inode, &dotdot, &fibh, &cfi))
+ goto out_unlock;
+
+ if (fibh.sbh != fibh.ebh)
+ brelse(fibh.ebh);
+ brelse(fibh.sbh);
+
+ inode = udf_iget(child->d_inode->i_sb,
+ lelb_to_cpu(cfi.icb.extLocation));
+ if (!inode)
+ goto out_unlock;
+ unlock_kernel();
+
+ parent = d_alloc_anon(inode);
+ if (!parent) {
+ iput(inode);
+ parent = ERR_PTR(-ENOMEM);
+ }
+
+ return parent;
+out_unlock:
+ unlock_kernel();
+ return ERR_PTR(-EACCES);
+}
+
+
+static struct dentry *udf_nfs_get_inode(struct super_block *sb, u32 block,
+ u16 partref, __u32 generation)
+{
+ struct inode *inode;
+ struct dentry *result;
+ kernel_lb_addr loc;
+
+ if (block == 0)
+ return ERR_PTR(-ESTALE);
+
+ loc.logicalBlockNum = block;
+ loc.partitionReferenceNum = partref;
+ inode = udf_iget(sb, loc);
+
+ if (inode == NULL)
+ return ERR_PTR(-ENOMEM);
+
+ if (generation && inode->i_generation != generation) {
+ iput(inode);
+ return ERR_PTR(-ESTALE);
+ }
+ result = d_alloc_anon(inode);
+ if (!result) {
+ iput(inode);
+ return ERR_PTR(-ENOMEM);
+ }
+ return result;
+}
+
+static struct dentry *udf_fh_to_dentry(struct super_block *sb,
+ struct fid *fid, int fh_len, int fh_type)
+{
+ if ((fh_len != 3 && fh_len != 5) ||
+ (fh_type != FILEID_UDF_WITH_PARENT &&
+ fh_type != FILEID_UDF_WITHOUT_PARENT))
+ return NULL;
+
+ return udf_nfs_get_inode(sb, fid->udf.block, fid->udf.partref,
+ fid->udf.generation);
+}
+
+static struct dentry *udf_fh_to_parent(struct super_block *sb,
+ struct fid *fid, int fh_len, int fh_type)
+{
+ if (fh_len != 5 || fh_type != FILEID_UDF_WITH_PARENT)
+ return NULL;
+
+ return udf_nfs_get_inode(sb, fid->udf.parent_block,
+ fid->udf.parent_partref,
+ fid->udf.parent_generation);
+}
+static int udf_encode_fh(struct dentry *de, __u32 *fh, int *lenp,
+ int connectable)
+{
+ int len = *lenp;
+ struct inode *inode = de->d_inode;
+ kernel_lb_addr location = UDF_I(inode)->i_location;
+ struct fid *fid = (struct fid *)fh;
+ int type = FILEID_UDF_WITHOUT_PARENT;
+
+ if (len < 3 || (connectable && len < 5))
+ return 255;
+
+ *lenp = 3;
+ fid->udf.block = location.logicalBlockNum;
+ fid->udf.partref = location.partitionReferenceNum;
+ fid->udf.generation = inode->i_generation;
+
+ if (connectable && !S_ISDIR(inode->i_mode)) {
+ spin_lock(&de->d_lock);
+ inode = de->d_parent->d_inode;
+ location = UDF_I(inode)->i_location;
+ fid->udf.parent_block = location.logicalBlockNum;
+ fid->udf.parent_partref = location.partitionReferenceNum;
+ fid->udf.parent_generation = inode->i_generation;
+ spin_unlock(&de->d_lock);
+ *lenp = 5;
+ type = FILEID_UDF_WITH_PARENT;
+ }
+
+ return type;
+}
+
+const struct export_operations udf_export_ops = {
+ .encode_fh = udf_encode_fh,
+ .fh_to_dentry = udf_fh_to_dentry,
+ .fh_to_parent = udf_fh_to_parent,
+ .get_parent = udf_get_parent,
+};
+
const struct inode_operations udf_dir_inode_operations = {
.lookup = udf_lookup,
.create = udf_create,
#include <linux/slab.h>
#include <linux/buffer_head.h>
-inline uint32_t udf_get_pblock(struct super_block *sb, uint32_t block,
- uint16_t partition, uint32_t offset)
+uint32_t udf_get_pblock(struct super_block *sb, uint32_t block,
+ uint16_t partition, uint32_t offset)
{
struct udf_sb_info *sbi = UDF_SB(sb);
struct udf_part_map *map;
/* Fill in the rest of the superblock */
sb->s_op = &udf_sb_ops;
+ sb->s_export_op = &udf_export_ops;
sb->dq_op = NULL;
sb->s_dirt = 0;
sb->s_magic = UDF_SUPER_MAGIC;
struct buffer_head;
struct super_block;
+extern const struct export_operations udf_export_ops;
extern const struct inode_operations udf_dir_inode_operations;
extern const struct file_operations udf_dir_operations;
extern const struct inode_operations udf_file_inode_operations;
#include <linux/suspend.h>
struct pxa_cpu_pm_fns {
- int save_size;
+ int save_count;
void (*save)(unsigned long *);
void (*restore)(unsigned long *);
int (*valid)(suspend_state_t state);
static inline void arch_reset(char mode)
{
- RCSR = RCSR_HWR | RCSR_WDR | RCSR_SMR | RCSR_GPR;
+ if (cpu_is_pxa2xx())
+ RCSR = RCSR_HWR | RCSR_WDR | RCSR_SMR | RCSR_GPR;
if (mode == 's') {
/* Jump into ROM at address 0 */
/*
* include/asm-blackfin/dpmc.h - Miscellaneous IOCTL commands for Dynamic Power
* Management Controller Driver.
- * Copyright (C) 2004 Analog Device Inc.
+ * Copyright (C) 2004-2008 Analog Device Inc.
*
*/
#ifndef _BLACKFIN_DPMC_H_
extern unsigned long get_cclk(void);
extern unsigned long get_sclk(void);
+struct bfin_dpmc_platform_data {
+ const unsigned int *tuple_tab;
+ unsigned short tabsize;
+ unsigned short vr_settling_time; /* in us */
+};
+
+#define VRPAIR(vlev, freq) (((vlev) << 16) | ((freq) >> 16))
+
#endif /* __KERNEL__ */
#endif /*_BLACKFIN_DPMC_H_*/
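
As an illustration only (not part of this patch set), a board file could fill the
new bfin_dpmc_platform_data with VRPAIR() tuples along these lines; the
voltage/frequency pairs and the variable names are assumptions, and the VLEV_*
masks are the ones defined in the mach headers further below:

    /* hypothetical sketch: voltage-level / core-clock pairs for the regulator */
    static const unsigned int vr_freq_pairs[] = {
        VRPAIR(VLEV_100, 100000000),    /* 1.00 V up to 100 MHz */
        VRPAIR(VLEV_120, 400000000),    /* 1.20 V up to 400 MHz */
    };

    static struct bfin_dpmc_platform_data bfin_dmpc_vreg_data = {
        .tuple_tab         = vr_freq_pairs,
        .tabsize           = ARRAY_SIZE(vr_freq_pairs),
        .vr_settling_time  = 25,        /* us */
    };
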
#define PF_DTRACE_OFF 1
#define PF_DTRACE_BIT 5
+/*
+ * NOTE! The single-stepping code assumes that all interrupt handlers
+ * start by saving SYSCFG on the stack with their first instruction.
+ */
+
/* This one is used for exceptions, emulation, and NMI. It doesn't push
RETI and doesn't do cli. */
#define SAVE_ALL_SYS save_context_no_interrupts
#define UART_PUT_CHAR(uart, v) bfin_write16(((uart)->port.membase + OFFSET_THR), v)
#define UART_PUT_DLL(uart, v) bfin_write16(((uart)->port.membase + OFFSET_DLL), v)
#define UART_PUT_IER(uart, v) bfin_write16(((uart)->port.membase + OFFSET_IER), v)
+#define UART_SET_IER(uart, v) UART_PUT_IER(uart, UART_GET_IER(uart) | (v))
+#define UART_CLEAR_IER(uart, v) UART_PUT_IER(uart, UART_GET_IER(uart) & ~(v))
#define UART_PUT_DLH(uart, v) bfin_write16(((uart)->port.membase + OFFSET_DLH), v)
#define UART_PUT_LCR(uart, v) bfin_write16(((uart)->port.membase + OFFSET_LCR), v)
#define UART_PUT_GCTL(uart, v) bfin_write16(((uart)->port.membase + OFFSET_GCTL), v)
+#define UART_SET_DLAB(uart) do { UART_PUT_LCR(uart, UART_GET_LCR(uart) | DLAB); SSYNC(); } while (0)
+#define UART_CLEAR_DLAB(uart) do { UART_PUT_LCR(uart, UART_GET_LCR(uart) & ~DLAB); SSYNC(); } while (0)
+
#if defined(CONFIG_BFIN_UART0_CTSRTS) || defined(CONFIG_BFIN_UART1_CTSRTS)
# define CONFIG_SERIAL_BFIN_CTSRTS
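
For context, a minimal usage sketch of the new DLAB helpers, assuming a
driver-local 'uart' port and a precomputed divisor 'quot' (both hypothetical
here): the macros hide the LCR DLAB toggling and the SSYNC() needed on parts
where DLL/DLH share addresses with other UART registers.

    UART_SET_DLAB(uart);                    /* expose DLL/DLH via LCR.DLAB */
    UART_PUT_DLL(uart, quot & 0xFF);        /* low byte of baud divisor */
    UART_PUT_DLH(uart, (quot >> 8) & 0xFF); /* high byte of baud divisor */
    UART_CLEAR_DLAB(uart);                  /* switch back to THR/RBR/IER */
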
#define UART_PUT_CHAR(uart,v) bfin_write16(((uart)->port.membase + OFFSET_THR),v)
#define UART_PUT_DLL(uart,v) bfin_write16(((uart)->port.membase + OFFSET_DLL),v)
#define UART_PUT_IER(uart,v) bfin_write16(((uart)->port.membase + OFFSET_IER),v)
+#define UART_SET_IER(uart,v) UART_PUT_IER(uart, UART_GET_IER(uart) | (v))
+#define UART_CLEAR_IER(uart,v) UART_PUT_IER(uart, UART_GET_IER(uart) & ~(v))
#define UART_PUT_DLH(uart,v) bfin_write16(((uart)->port.membase + OFFSET_DLH),v)
#define UART_PUT_LCR(uart,v) bfin_write16(((uart)->port.membase + OFFSET_LCR),v)
#define UART_PUT_GCTL(uart,v) bfin_write16(((uart)->port.membase + OFFSET_GCTL),v)
+#define UART_SET_DLAB(uart) do { UART_PUT_LCR(uart, UART_GET_LCR(uart) | DLAB); SSYNC(); } while (0)
+#define UART_CLEAR_DLAB(uart) do { UART_PUT_LCR(uart, UART_GET_LCR(uart) & ~DLAB); SSYNC(); } while (0)
+
#ifdef CONFIG_BFIN_UART0_CTSRTS
# define CONFIG_SERIAL_BFIN_CTSRTS
# ifndef CONFIG_UART0_CTS_PIN
#define VLEV_110 0x00B0 /* VLEV = 1.10 V (-5% - +10% Accuracy) */
#define VLEV_115 0x00C0 /* VLEV = 1.15 V (-5% - +10% Accuracy) */
#define VLEV_120 0x00D0 /* VLEV = 1.20 V (-5% - +10% Accuracy) */
+#define VLEV_125 0x00E0 /* VLEV = 1.25 V (-5% - +10% Accuracy) */
+#define VLEV_130 0x00F0 /* VLEV = 1.30 V (-5% - +10% Accuracy) */
#define WAKE 0x0100 /* Enable RTC/Reset Wakeup From Hibernate */
#define SCKELOW 0x8000 /* Do Not Drive SCKE High During Reset After Hibernate */
DMA8/9 Interrupt IVG13 28
DMA10/11 Interrupt IVG13 29
Watchdog Timer IVG13 30
- Software Interrupt 1 IVG14 31
- Software Interrupt 2 --
+
+ Softirq IVG14 31
+ System Call --
(lowest priority) IVG15 32 *
*/
-#define SYS_IRQS 32
-#define NR_PERI_INTS 24
+#define SYS_IRQS 31
+#define NR_PERI_INTS 24
/* The ABSTRACT IRQ definitions */
/** the first seven of the following are fixed, the rest you change if you need to **/
#define IRQ_SPORT0_TX 17 /*DMA2 Interrupt (SPORT0 TX) */
#define IRQ_SPORT1_RX 18 /*DMA3 Interrupt (SPORT1 RX) */
#define IRQ_SPORT1_TX 19 /*DMA4 Interrupt (SPORT1 TX) */
-#define IRQ_SPI 20 /*DMA5 Interrupt (SPI) */
+#define IRQ_SPI 20 /*DMA5 Interrupt (SPI) */
#define IRQ_UART_RX 21 /*DMA6 Interrupt (UART RX) */
#define IRQ_UART_TX 22 /*DMA7 Interrupt (UART TX) */
#define IRQ_TMR0 23 /*Timer 0 */
#define IRQ_MEM_DMA1 29 /*DMA10/11 Interrupt (Memory DMA Stream 1) */
#define IRQ_WATCH 30 /*Watch Dog Timer */
-#define IRQ_SW_INT1 31 /*Software Int 1 */
-#define IRQ_SW_INT2 32 /*Software Int 2 (reserved for SYSCALL) */
-
#define IRQ_PF0 33
#define IRQ_PF1 34
#define IRQ_PF2 35
#define UART_PUT_CHAR(uart,v) bfin_write16(((uart)->port.membase + OFFSET_THR),v)
#define UART_PUT_DLL(uart,v) bfin_write16(((uart)->port.membase + OFFSET_DLL),v)
#define UART_PUT_IER(uart,v) bfin_write16(((uart)->port.membase + OFFSET_IER),v)
+#define UART_SET_IER(uart,v) UART_PUT_IER(uart, UART_GET_IER(uart) | (v))
+#define UART_CLEAR_IER(uart,v) UART_PUT_IER(uart, UART_GET_IER(uart) & ~(v))
#define UART_PUT_DLH(uart,v) bfin_write16(((uart)->port.membase + OFFSET_DLH),v)
#define UART_PUT_LCR(uart,v) bfin_write16(((uart)->port.membase + OFFSET_LCR),v)
#define UART_PUT_GCTL(uart,v) bfin_write16(((uart)->port.membase + OFFSET_GCTL),v)
+#define UART_SET_DLAB(uart) do { UART_PUT_LCR(uart, UART_GET_LCR(uart) | DLAB); SSYNC(); } while (0)
+#define UART_CLEAR_DLAB(uart) do { UART_PUT_LCR(uart, UART_GET_LCR(uart) & ~DLAB); SSYNC(); } while (0)
+
#if defined(CONFIG_BFIN_UART0_CTSRTS) || defined(CONFIG_BFIN_UART1_CTSRTS)
# define CONFIG_SERIAL_BFIN_CTSRTS
/*
* Interrupt source definitions
- Event Source Core Event Name
-Core Emulation **
- Events (highest priority) EMU 0
- Reset RST 1
- NMI NMI 2
- Exception EVX 3
- Reserved -- 4
- Hardware Error IVHW 5
- Core Timer IVTMR 6 *
-
-.....
-
- Software Interrupt 1 IVG14 31
- Software Interrupt 2 --
- (lowest priority) IVG15 32 *
+ * Event Source Core Event Name
+ * Core Emulation **
+ * Events (highest priority) EMU 0
+ * Reset RST 1
+ * NMI NMI 2
+ * Exception EVX 3
+ * Reserved -- 4
+ * Hardware Error IVHW 5
+ * Core Timer IVTMR 6
+ * .....
+ *
+ * Softirq IVG14
+ * System Call --
+ * (lowest priority) IVG15
*/
-#define SYS_IRQS 41
+#define SYS_IRQS 39
#define NR_PERI_INTS 32
/* The ABSTRACT IRQ definitions */
#define IRQ_PORTG_INTB 35 /* PF Port G (PF15:0) Interrupt B */
#define IRQ_MEM_DMA0 36 /*(Memory DMA Stream 0) */
#define IRQ_MEM_DMA1 37 /*(Memory DMA Stream 1) */
-#define IRQ_PROG_INTB 38 /* PF Ports F (PF15:0) Interrupt B */
+#define IRQ_PROG_INTB 38 /* PF Ports F (PF15:0) Interrupt B */
#define IRQ_WATCH 38 /*Watch Dog Timer */
-#define IRQ_SW_INT1 40 /*Software Int 1 */
-#define IRQ_SW_INT2 41 /*Software Int 2 (reserved for SYSCALL) */
#define IRQ_PPI_ERROR 42 /*PPI Error Interrupt */
#define IRQ_CAN_ERROR 43 /*CAN Error Interrupt */
#define UART_PUT_GCTL(uart,v) bfin_write16(((uart)->port.membase + OFFSET_GCTL),v)
#define UART_PUT_MCR(uart,v) bfin_write16(((uart)->port.membase + OFFSET_MCR),v)
+#define UART_SET_DLAB(uart) /* MMRs not muxed on BF54x */
+#define UART_CLEAR_DLAB(uart) /* MMRs not muxed on BF54x */
+
#if defined(CONFIG_BFIN_UART0_CTSRTS) || defined(CONFIG_BFIN_UART1_CTSRTS)
# define CONFIG_SERIAL_BFIN_CTSRTS
#define KPADWE 0x1000 /* Keypad Wake-Up Enable */
#define ROTWE 0x2000 /* Rotary Wake-Up Enable */
+#define FREQ_333 0x0001 /* Switching Frequency Is 333 kHz */
+#define FREQ_667 0x0002 /* Switching Frequency Is 667 kHz */
+#define FREQ_1000 0x0003 /* Switching Frequency Is 1 MHz */
+
+#define GAIN_5 0x0000 /* GAIN = 5*/
+#define GAIN_10 0x0004 /* GAIN = 10*/
+#define GAIN_20 0x0008 /* GAIN = 20*/
+#define GAIN_50 0x000C /* GAIN = 50*/
+
+#define VLEV_085 0x0060 /* VLEV = 0.85 V (-5% - +10% Accuracy) */
+#define VLEV_090 0x0070 /* VLEV = 0.90 V (-5% - +10% Accuracy) */
+#define VLEV_095 0x0080 /* VLEV = 0.95 V (-5% - +10% Accuracy) */
+#define VLEV_100 0x0090 /* VLEV = 1.00 V (-5% - +10% Accuracy) */
+#define VLEV_105 0x00A0 /* VLEV = 1.05 V (-5% - +10% Accuracy) */
+#define VLEV_110 0x00B0 /* VLEV = 1.10 V (-5% - +10% Accuracy) */
+#define VLEV_115 0x00C0 /* VLEV = 1.15 V (-5% - +10% Accuracy) */
+#define VLEV_120 0x00D0 /* VLEV = 1.20 V (-5% - +10% Accuracy) */
+#define VLEV_125 0x00E0 /* VLEV = 1.25 V (-5% - +10% Accuracy) */
+#define VLEV_130 0x00F0 /* VLEV = 1.30 V (-5% - +10% Accuracy) */
+
/* Bit masks for NFC_CTL */
#define WR_DLY 0xf /* Write Strobe Delay */
#define UART_PUT_CHAR(uart,v) bfin_write16(((uart)->port.membase + OFFSET_THR),v)
#define UART_PUT_DLL(uart,v) bfin_write16(((uart)->port.membase + OFFSET_DLL),v)
#define UART_PUT_IER(uart,v) bfin_write16(((uart)->port.membase + OFFSET_IER),v)
+#define UART_SET_IER(uart,v) UART_PUT_IER(uart, UART_GET_IER(uart) | (v))
+#define UART_CLEAR_IER(uart,v) UART_PUT_IER(uart, UART_GET_IER(uart) & ~(v))
#define UART_PUT_DLH(uart,v) bfin_write16(((uart)->port.membase + OFFSET_DLH),v)
#define UART_PUT_LCR(uart,v) bfin_write16(((uart)->port.membase + OFFSET_LCR),v)
#define UART_PUT_GCTL(uart,v) bfin_write16(((uart)->port.membase + OFFSET_GCTL),v)
+#define UART_SET_DLAB(uart) do { UART_PUT_LCR(uart, UART_GET_LCR(uart) | DLAB); SSYNC(); } while (0)
+#define UART_CLEAR_DLAB(uart) do { UART_PUT_LCR(uart, UART_GET_LCR(uart) & ~DLAB); SSYNC(); } while (0)
+
#ifdef CONFIG_BFIN_UART0_CTSRTS
# define CONFIG_SERIAL_BFIN_CTSRTS
# ifndef CONFIG_UART0_CTS_PIN
#define CHIPID_FAMILY 0x0FFFF000
#define CHIPID_MANUFACTURE 0x00000FFE
+/* VR_CTL Masks */
+#define FREQ 0x0003 /* Switching Oscillator Frequency For Regulator */
+#define HIBERNATE 0x0000 /* Powerdown/Bypass On-Board Regulation */
+#define FREQ_333 0x0001 /* Switching Frequency Is 333 kHz */
+#define FREQ_667 0x0002 /* Switching Frequency Is 667 kHz */
+#define FREQ_1000 0x0003 /* Switching Frequency Is 1 MHz */
+
+#define GAIN 0x000C /* Voltage Level Gain */
+#define GAIN_5 0x0000 /* GAIN = 5*/
+#define GAIN_10 0x0004 /* GAIN = 10*/
+#define GAIN_20 0x0008 /* GAIN = 20*/
+#define GAIN_50 0x000C /* GAIN = 50*/
+
+#define VLEV 0x00F0 /* Internal Voltage Level */
+#define VLEV_085 0x0060 /* VLEV = 0.85 V (-5% - +10% Accuracy) */
+#define VLEV_090 0x0070 /* VLEV = 0.90 V (-5% - +10% Accuracy) */
+#define VLEV_095 0x0080 /* VLEV = 0.95 V (-5% - +10% Accuracy) */
+#define VLEV_100 0x0090 /* VLEV = 1.00 V (-5% - +10% Accuracy) */
+#define VLEV_105 0x00A0 /* VLEV = 1.05 V (-5% - +10% Accuracy) */
+#define VLEV_110 0x00B0 /* VLEV = 1.10 V (-5% - +10% Accuracy) */
+#define VLEV_115 0x00C0 /* VLEV = 1.15 V (-5% - +10% Accuracy) */
+#define VLEV_120 0x00D0 /* VLEV = 1.20 V (-5% - +10% Accuracy) */
+#define VLEV_125 0x00E0 /* VLEV = 1.25 V (-5% - +10% Accuracy) */
+#define VLEV_130 0x00F0 /* VLEV = 1.30 V (-5% - +10% Accuracy) */
+
+#define WAKE 0x0100 /* Enable RTC/Reset Wakeup From Hibernate */
+#define SCKELOW 0x8000 /* Do Not Drive SCKE High During Reset After Hibernate */
+
/* PLL_DIV Masks */
#define SCLK_DIV(x) (x) /* SCLK = VCO / x */
Supplemental interrupt 0 IVG7 69
supplemental interrupt 1 IVG7 70
- Software Interrupt 1 IVG14 71
- Software Interrupt 2 IVG15 72 *
- (lowest priority)
+ Softirq IVG14
+ System Call --
+ (lowest priority) IVG15
+
**********************************************************************/
-#define SYS_IRQS 72
+#define SYS_IRQS 71
#define NR_PERI_INTS 64
/*
#define IRQ_RESERVED_2 (IVG_BASE + 61) /* Reserved interrupt */
#define IRQ_SUPPLE_0 (IVG_BASE + 62) /* Supplemental interrupt 0 */
#define IRQ_SUPPLE_1 (IVG_BASE + 63) /* supplemental interrupt 1 */
-#define IRQ_SW_INT1 71 /* Software Interrupt 1 */
-#define IRQ_SW_INT2 72 /* Software Interrupt 2 */
- /* reserved for SYSCALL */
+
#define IRQ_PF0 73
#define IRQ_PF1 74
#define IRQ_PF2 75
* 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
*/
+/*
+ * NOTE! The single-stepping code assumes that all interrupt handlers
+ * start by saving SYSCFG on the stack with their first instruction.
+ */
+
/*
* Code to save processor context.
* We even save the register which are preserved by a function call
#ifndef CONFIG_CPU_FREQ
#define TIME_SCALE 1
+#define __bfin_cycles_off (0)
+#define __bfin_cycles_mod (0)
#else
/*
* Blackfin CPU frequency scaling supports max Core Clock 1, 1/2 and 1/4 .
* adjust the Core Timer Presale Register. This way we don't lose time.
*/
#define TIME_SCALE 4
+extern unsigned long long __bfin_cycles_off;
+extern unsigned int __bfin_cycles_mod;
#endif
#endif
extern void identify_cpu(struct mn10300_cpuinfo *);
extern void print_cpu_info(struct mn10300_cpuinfo *);
extern void dodgy_tsc(void);
-#define cpu_relax() do {} while (0)
+#define cpu_relax() barrier()
/*
* User space process size: 1.75GB (default).
* 0 1 2 3 4 ... 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
* - - - - - - U0 U1 U2 U3 W I M G E - UX UW UR SX SW SR
*
+ * Newer 440 cores (440x6 as used on AMCC 460EX/460GT) have additional
+ * TLB2 storage attribute fields. Those are:
+ *
+ * TLB2:
+ * 0...10 11 12 13 14 15 16...31
+ * no change WL1 IL1I IL1D IL2I IL2D no change
+ *
* There are some constraints and options to decide mapping software bits
* into TLB entry.
*
/* Flag indicating progress during context switch. */
#define SPU_CONTEXT_SWITCH_PENDING 0UL
+#define SPU_CONTEXT_FAULT_PENDING 1UL
struct spu_context;
struct spu_runqueue;
unsigned int irqs[3];
u32 node;
u64 flags;
- u64 dar;
- u64 dsisr;
u64 class_0_pending;
+ u64 class_0_dar;
+ u64 class_0_dsisr;
+ u64 class_1_dar;
+ u64 class_1_dsisr;
size_t ls_size;
unsigned int slb_replace;
struct mm_struct *mm;
void (* wbox_callback)(struct spu *spu);
void (* ibox_callback)(struct spu *spu);
- void (* stop_callback)(struct spu *spu);
+ void (* stop_callback)(struct spu *spu, int irq);
void (* mfc_callback)(struct spu *spu);
char irq_c0[8];
u64 spu_chnldata_RW[32];
u32 spu_mailbox_data[4];
u32 pu_mailbox_data[1];
- u64 dar, dsisr, class_0_pending;
+ u64 class_0_dar, class_0_dsisr, class_0_pending;
+ u64 class_1_dar, class_1_dsisr;
unsigned long suspend_time;
spinlock_t register_lock;
};
struct kvm_vcpu_stat {
u32 exit_userspace;
+ u32 exit_null;
u32 exit_external_request;
u32 exit_external_interrupt;
u32 exit_stop_request;
return skey;
}
+#ifdef CONFIG_PAGE_STATES
+
+struct page;
+void arch_free_page(struct page *page, int order);
+void arch_alloc_page(struct page *page, int order);
+
+#define HAVE_ARCH_FREE_PAGE
+#define HAVE_ARCH_ALLOC_PAGE
+
+#endif
+
#endif /* !__ASSEMBLY__ */
/* to align the pointer to the (next) page boundary */
extern void user_enable_single_step(struct task_struct *);
extern void user_disable_single_step(struct task_struct *);
+#define __ARCH_WANT_COMPAT_SYS_PTRACE
+
#define user_mode(regs) (((regs)->psw.mask & PSW_MASK_PSTATE) != 0)
#define instruction_pointer(regs) ((regs)->psw.addr & PSW_ADDR_INSN)
#define regs_return_value(regs)((regs)->gprs[2])
#define pfault_fini() do { } while (0)
#endif /* CONFIG_PFAULT */
+#ifdef CONFIG_PAGE_STATES
+extern void cmma_init(void);
+#else
+static inline void cmma_init(void) { }
+#endif
+
#define finish_arch_switch(prev) do { \
set_fs(current->thread.mm_segment); \
account_vtime(prev); \
#if defined(CONFIG_CPU_SUBTYPE_SH7720) || \
- defined(CONFIG_CPU_SUBTYPE_SH7721) || \
- defined(CONFIG_CPU_SUBTYPE_SH7709)
+ defined(CONFIG_CPU_SUBTYPE_SH7721)
#define SH_DMAC_BASE 0xa4010020
+#else
+#define SH_DMAC_BASE 0xa4000020
+#endif
+#if defined(CONFIG_CPU_SUBTYPE_SH7720) || defined(CONFIG_CPU_SUBTYPE_SH7709)
#define DMTE0_IRQ 48
#define DMTE1_IRQ 49
#define DMTE2_IRQ 50
#define DMTE3_IRQ 51
#define DMTE4_IRQ 76
#define DMTE5_IRQ 77
-
-#else
-#define SH_DMAC_BASE 0xa4000020
#endif
/* Definitions for the SuperH DMAC */
struct intc_sense_reg *sense_regs;
unsigned int nr_sense_regs;
char *name;
+#ifdef CONFIG_CPU_SH3
+ struct intc_mask_reg *ack_regs;
+ unsigned int nr_ack_regs;
+#endif
};
#define _INTC_ARRAY(a) a, sizeof(a)/sizeof(*a)
chipname, \
}
+#ifdef CONFIG_CPU_SH3
+#define DECLARE_INTC_DESC_ACK(symbol, chipname, vectors, groups, \
+ mask_regs, prio_regs, sense_regs, ack_regs) \
+struct intc_desc symbol __initdata = { \
+ _INTC_ARRAY(vectors), _INTC_ARRAY(groups), \
+ _INTC_ARRAY(mask_regs), _INTC_ARRAY(prio_regs), \
+ _INTC_ARRAY(sense_regs), \
+ chipname, \
+ _INTC_ARRAY(ack_regs), \
+}
+#endif
+
void __init register_intc_controller(struct intc_desc *desc);
int intc_set_priority(unsigned int irq, unsigned int prio);
void __init plat_irq_setup(void);
+#ifdef CONFIG_CPU_SH3
+void __init plat_irq_setup_sh3(void);
+#endif
enum { IRQ_MODE_IRQ, IRQ_MODE_IRQ7654, IRQ_MODE_IRQ3210,
IRQ_MODE_IRL7654_MASK, IRQ_MODE_IRL3210_MASK,
unsigned long long poke_real_address_q(unsigned long long addr,
unsigned long long val);
-/* arch/sh/mm/ioremap_64.c */
-unsigned long onchip_remap(unsigned long addr, unsigned long size,
- const char *name);
-extern void onchip_unmap(unsigned long vaddr);
-
#if !defined(CONFIG_MMU)
#define virt_to_phys(address) ((unsigned long)(address))
#define phys_to_virt(address) ((void *)(address))
void __iomem *__ioremap(unsigned long offset, unsigned long size,
unsigned long flags);
void __iounmap(void __iomem *addr);
+
+/* arch/sh/mm/ioremap_64.c */
+unsigned long onchip_remap(unsigned long addr, unsigned long size,
+ const char *name);
+extern void onchip_unmap(unsigned long vaddr);
#else
#define __ioremap(offset, size, flags) ((void __iomem *)(offset))
#define __iounmap(addr) do { } while (0)
+#define onchip_remap(addr, size, name) (addr)
+#define onchip_unmap(addr) do { } while (0)
#endif /* CONFIG_MMU */
static inline void __iomem *
+++ /dev/null
-#ifndef __ASM_SH_KEYBOARD_H
-#define __ASM_SH_KEYBOARD_H
-/*
- * $Id: keyboard.h,v 1.1.1.1 2001/10/15 20:45:09 mrbrown Exp $
- */
-
-#include <linux/kd.h>
-#include <asm/machvec.h>
-
-#ifdef CONFIG_SH_MPC1211
-#include <asm/mpc1211/keyboard-mpc1211.h>
-#endif
-#endif
/* ASID is 8-bit value, so it can't be 0x100 */
#define MMU_NO_ASID 0x100
+#ifdef CONFIG_MMU
#define asid_cache(cpu) (cpu_data[cpu].asid_cache)
#define cpu_context(cpu, mm) ((mm)->context.id[cpu])
*/
#define MMU_VPN_MASK 0xfffff000
-#ifdef CONFIG_MMU
#if defined(CONFIG_SUPERH32)
#include "mmu_context_32.h"
#else
#define destroy_context(mm) do { } while (0)
#define set_asid(asid) do { } while (0)
#define get_asid() (0)
+#define cpu_asid(cpu, mm) ({ (void)cpu; 0; })
+#define switch_and_save_asid(asid) (0)
#define set_TTB(pgd) do { } while (0)
#define get_TTB() (0)
#define activate_context(mm,cpu) do { } while (0)
/* arch/sh/kernel/setup.c */
void __init setup_bootmem_allocator(unsigned long start_pfn);
+void __init __add_active_range(unsigned int nid, unsigned long start_pfn,
+ unsigned long end_pfn);
#endif /* __KERNEL__ */
#endif /* __ASM_SH_MMZONE_H */
+++ /dev/null
-/* $Id: dma.h,v 1.7 1992/12/14 00:29:34 root Exp root $
- * linux/include/asm/dma.h: Defines for using and allocating dma channels.
- * Written by Hennus Bergman, 1992.
- * High DMA channel support & info by Hannu Savolainen
- * and John Boyd, Nov. 1992.
- */
-
-#ifndef _ASM_MPC1211_DMA_H
-#define _ASM_MPC1211_DMA_H
-
-#include <linux/spinlock.h> /* And spinlocks */
-#include <asm/io.h> /* need byte IO */
-#include <linux/delay.h>
-
-
-#ifdef HAVE_REALLY_SLOW_DMA_CONTROLLER
-#define dma_outb outb_p
-#else
-#define dma_outb outb
-#endif
-
-#define dma_inb inb
-
-/*
- * NOTES about DMA transfers:
- *
- * controller 1: channels 0-3, byte operations, ports 00-1F
- * controller 2: channels 4-7, word operations, ports C0-DF
- *
- * - ALL registers are 8 bits only, regardless of transfer size
- * - channel 4 is not used - cascades 1 into 2.
- * - channels 0-3 are byte - addresses/counts are for physical bytes
- * - channels 5-7 are word - addresses/counts are for physical words
- * - transfers must not cross physical 64K (0-3) or 128K (5-7) boundaries
- * - transfer count loaded to registers is 1 less than actual count
- * - controller 2 offsets are all even (2x offsets for controller 1)
- * - page registers for 5-7 don't use data bit 0, represent 128K pages
- * - page registers for 0-3 use bit 0, represent 64K pages
- *
- * DMA transfers are limited to the lower 16MB of _physical_ memory.
- * Note that addresses loaded into registers must be _physical_ addresses,
- * not logical addresses (which may differ if paging is active).
- *
- * Address mapping for channels 0-3:
- *
- * A23 ... A16 A15 ... A8 A7 ... A0 (Physical addresses)
- * | ... | | ... | | ... |
- * | ... | | ... | | ... |
- * | ... | | ... | | ... |
- * P7 ... P0 A7 ... A0 A7 ... A0
- * | Page | Addr MSB | Addr LSB | (DMA registers)
- *
- * Address mapping for channels 5-7:
- *
- * A23 ... A17 A16 A15 ... A9 A8 A7 ... A1 A0 (Physical addresses)
- * | ... | \ \ ... \ \ \ ... \ \
- * | ... | \ \ ... \ \ \ ... \ (not used)
- * | ... | \ \ ... \ \ \ ... \
- * P7 ... P1 (0) A7 A6 ... A0 A7 A6 ... A0
- * | Page | Addr MSB | Addr LSB | (DMA registers)
- *
- * Again, channels 5-7 transfer _physical_ words (16 bits), so addresses
- * and counts _must_ be word-aligned (the lowest address bit is _ignored_ at
- * the hardware level, so odd-byte transfers aren't possible).
- *
- * Transfer count (_not # bytes_) is limited to 64K, represented as actual
- * count - 1 : 64K => 0xFFFF, 1 => 0x0000. Thus, count is always 1 or more,
- * and up to 128K bytes may be transferred on channels 5-7 in one operation.
- *
- */
-
-#define MAX_DMA_CHANNELS 8
-
-/* The maximum address that we can perform a DMA transfer to on this platform */
-#define MAX_DMA_ADDRESS (PAGE_OFFSET+0x10000000)
-
-/* 8237 DMA controllers */
-#define IO_DMA1_BASE 0x00 /* 8 bit slave DMA, channels 0..3 */
-#define IO_DMA2_BASE 0xC0 /* 16 bit master DMA, ch 4(=slave input)..7 */
-
-/* DMA controller registers */
-#define DMA1_CMD_REG 0x08 /* command register (w) */
-#define DMA1_STAT_REG 0x08 /* status register (r) */
-#define DMA1_REQ_REG 0x09 /* request register (w) */
-#define DMA1_MASK_REG 0x0A /* single-channel mask (w) */
-#define DMA1_MODE_REG 0x0B /* mode register (w) */
-#define DMA1_CLEAR_FF_REG 0x0C /* clear pointer flip-flop (w) */
-#define DMA1_TEMP_REG 0x0D /* Temporary Register (r) */
-#define DMA1_RESET_REG 0x0D /* Master Clear (w) */
-#define DMA1_CLR_MASK_REG 0x0E /* Clear Mask */
-#define DMA1_MASK_ALL_REG 0x0F /* all-channels mask (w) */
-
-#define DMA2_CMD_REG 0xD0 /* command register (w) */
-#define DMA2_STAT_REG 0xD0 /* status register (r) */
-#define DMA2_REQ_REG 0xD2 /* request register (w) */
-#define DMA2_MASK_REG 0xD4 /* single-channel mask (w) */
-#define DMA2_MODE_REG 0xD6 /* mode register (w) */
-#define DMA2_CLEAR_FF_REG 0xD8 /* clear pointer flip-flop (w) */
-#define DMA2_TEMP_REG 0xDA /* Temporary Register (r) */
-#define DMA2_RESET_REG 0xDA /* Master Clear (w) */
-#define DMA2_CLR_MASK_REG 0xDC /* Clear Mask */
-#define DMA2_MASK_ALL_REG 0xDE /* all-channels mask (w) */
-
-#define DMA_ADDR_0 0x00 /* DMA address registers */
-#define DMA_ADDR_1 0x02
-#define DMA_ADDR_2 0x04
-#define DMA_ADDR_3 0x06
-#define DMA_ADDR_4 0xC0
-#define DMA_ADDR_5 0xC4
-#define DMA_ADDR_6 0xC8
-#define DMA_ADDR_7 0xCC
-
-#define DMA_CNT_0 0x01 /* DMA count registers */
-#define DMA_CNT_1 0x03
-#define DMA_CNT_2 0x05
-#define DMA_CNT_3 0x07
-#define DMA_CNT_4 0xC2
-#define DMA_CNT_5 0xC6
-#define DMA_CNT_6 0xCA
-#define DMA_CNT_7 0xCE
-
-#define DMA_PAGE_0 0x87 /* DMA page registers */
-#define DMA_PAGE_1 0x83
-#define DMA_PAGE_2 0x81
-#define DMA_PAGE_3 0x82
-#define DMA_PAGE_5 0x8B
-#define DMA_PAGE_6 0x89
-#define DMA_PAGE_7 0x8A
-
-#define DMA_MODE_READ 0x44 /* I/O to memory, no autoinit, increment, single mode */
-#define DMA_MODE_WRITE 0x48 /* memory to I/O, no autoinit, increment, single mode */
-#define DMA_MODE_CASCADE 0xC0 /* pass thru DREQ->HRQ, DACK<-HLDA only */
-
-#define DMA_AUTOINIT 0x10
-
-
-extern spinlock_t dma_spin_lock;
-
-static __inline__ unsigned long claim_dma_lock(void)
-{
- unsigned long flags;
- spin_lock_irqsave(&dma_spin_lock, flags);
- return flags;
-}
-
-static __inline__ void release_dma_lock(unsigned long flags)
-{
- spin_unlock_irqrestore(&dma_spin_lock, flags);
-}
-
-/* enable/disable a specific DMA channel */
-static __inline__ void enable_dma(unsigned int dmanr)
-{
- if (dmanr<=3)
- dma_outb(dmanr, DMA1_MASK_REG);
- else
- dma_outb(dmanr & 3, DMA2_MASK_REG);
-}
-
-static __inline__ void disable_dma(unsigned int dmanr)
-{
- if (dmanr<=3)
- dma_outb(dmanr | 4, DMA1_MASK_REG);
- else
- dma_outb((dmanr & 3) | 4, DMA2_MASK_REG);
-}
-
-/* Clear the 'DMA Pointer Flip Flop'.
- * Write 0 for LSB/MSB, 1 for MSB/LSB access.
- * Use this once to initialize the FF to a known state.
- * After that, keep track of it. :-)
- * --- In order to do that, the DMA routines below should ---
- * --- only be used while holding the DMA lock ! ---
- */
-static __inline__ void clear_dma_ff(unsigned int dmanr)
-{
- if (dmanr<=3)
- dma_outb(0, DMA1_CLEAR_FF_REG);
- else
- dma_outb(0, DMA2_CLEAR_FF_REG);
-}
-
-/* set mode (above) for a specific DMA channel */
-static __inline__ void set_dma_mode(unsigned int dmanr, char mode)
-{
- if (dmanr<=3)
- dma_outb(mode | dmanr, DMA1_MODE_REG);
- else
- dma_outb(mode | (dmanr&3), DMA2_MODE_REG);
-}
-
-/* Set only the page register bits of the transfer address.
- * This is used for successive transfers when we know the contents of
- * the lower 16 bits of the DMA current address register, but a 64k boundary
- * may have been crossed.
- */
-static __inline__ void set_dma_page(unsigned int dmanr, unsigned int pagenr)
-{
- switch(dmanr) {
- case 0:
- dma_outb( pagenr & 0xff, DMA_PAGE_0);
- dma_outb((pagenr >> 8) & 0xff, DMA_PAGE_0 + 0x400);
- break;
- case 1:
- dma_outb( pagenr & 0xff, DMA_PAGE_1);
- dma_outb((pagenr >> 8) & 0xff, DMA_PAGE_1 + 0x400);
- break;
- case 2:
- dma_outb( pagenr & 0xff, DMA_PAGE_2);
- dma_outb((pagenr >> 8) & 0xff, DMA_PAGE_2 + 0x400);
- break;
- case 3:
- dma_outb( pagenr & 0xff, DMA_PAGE_3);
- dma_outb((pagenr >> 8) & 0xff, DMA_PAGE_3 + 0x400);
- break;
- case 5:
- dma_outb( pagenr & 0xfe, DMA_PAGE_5);
- dma_outb((pagenr >> 8) & 0xff, DMA_PAGE_5 + 0x400);
- break;
- case 6:
- dma_outb( pagenr & 0xfe, DMA_PAGE_6);
- dma_outb((pagenr >> 8) & 0xff, DMA_PAGE_6 + 0x400);
- break;
- case 7:
- dma_outb( pagenr & 0xfe, DMA_PAGE_7);
- dma_outb((pagenr >> 8) & 0xff, DMA_PAGE_7 + 0x400);
- break;
- }
-}
-
-
-/* Set transfer address & page bits for specific DMA channel.
- * Assumes dma flipflop is clear.
- */
-static __inline__ void set_dma_addr(unsigned int dmanr, unsigned int a)
-{
- set_dma_page(dmanr, a>>16);
- if (dmanr <= 3) {
- dma_outb( a & 0xff, ((dmanr&3)<<1) + IO_DMA1_BASE );
- dma_outb( (a>>8) & 0xff, ((dmanr&3)<<1) + IO_DMA1_BASE );
- } else {
- dma_outb( (a>>1) & 0xff, ((dmanr&3)<<2) + IO_DMA2_BASE );
- dma_outb( (a>>9) & 0xff, ((dmanr&3)<<2) + IO_DMA2_BASE );
- }
-}
-
-
-/* Set transfer size (max 64k for DMA1..3, 128k for DMA5..7) for
- * a specific DMA channel.
- * You must ensure the parameters are valid.
- * NOTE: from a manual: "the number of transfers is one more
- * than the initial word count"! This is taken into account.
- * Assumes dma flip-flop is clear.
- * NOTE 2: "count" represents _bytes_ and must be even for channels 5-7.
- */
-static __inline__ void set_dma_count(unsigned int dmanr, unsigned int count)
-{
- count--;
- if (dmanr <= 3) {
- dma_outb( count & 0xff, ((dmanr&3)<<1) + 1 + IO_DMA1_BASE );
- dma_outb( (count>>8) & 0xff, ((dmanr&3)<<1) + 1 + IO_DMA1_BASE );
- } else {
- dma_outb( (count>>1) & 0xff, ((dmanr&3)<<2) + 2 + IO_DMA2_BASE );
- dma_outb( (count>>9) & 0xff, ((dmanr&3)<<2) + 2 + IO_DMA2_BASE );
- }
-}
-
-
-/* Get DMA residue count. After a DMA transfer, this
- * should return zero. Reading this while a DMA transfer is
- * still in progress will return unpredictable results.
- * If called before the channel has been used, it may return 1.
- * Otherwise, it returns the number of _bytes_ left to transfer.
- *
- * Assumes DMA flip-flop is clear.
- */
-static __inline__ int get_dma_residue(unsigned int dmanr)
-{
- unsigned int io_port = (dmanr<=3)? ((dmanr&3)<<1) + 1 + IO_DMA1_BASE
- : ((dmanr&3)<<2) + 2 + IO_DMA2_BASE;
-
- /* using short to get 16-bit wrap around */
- unsigned short count;
-
- count = 1 + dma_inb(io_port);
- count += dma_inb(io_port) << 8;
- return (dmanr<=3)? count : (count<<1);
-}
-
-
-/* These are in kernel/dma.c: */
-extern int request_dma(unsigned int dmanr, const char * device_id); /* reserve a DMA channel */
-extern void free_dma(unsigned int dmanr); /* release it again */
-
-/* From PCI */
-
-#ifdef CONFIG_PCI
-extern int isa_dma_bridge_buggy;
-#else
-#define isa_dma_bridge_buggy (0)
-#endif
-
-#endif /* _ASM_MPC1211_DMA_H */
+++ /dev/null
-/*
- * include/asm-sh/mpc1211/io.h
- *
- * Copyright 2001 Saito.K & Jeanne
- *
- * IO functions for an Interface MPC-1211
- */
-
-#ifndef _ASM_SH_IO_MPC1211_H
-#define _ASM_SH_IO_MPC1211_H
-
-#include <linux/time.h>
-
-extern int mpc1211_irq_demux(int irq);
-
-extern void init_mpc1211_IRQ(void);
-extern void heartbeat_mpc1211(void);
-
-extern void mpc1211_rtc_gettimeofday(struct timeval *tv);
-extern int mpc1211_rtc_settimeofday(const struct timeval *tv);
-
-#endif /* _ASM_SH_IO_MPC1211_H */
+++ /dev/null
-/*
- * MPC1211 specific keybord definitions
- * Taken from the old asm-i386/keybord.h for PC/AT-style definitions
- * created 3 Nov 1996 by Geert Uytterhoeven.
- */
-
-#ifdef __KERNEL__
-
-#include <linux/kernel.h>
-#include <linux/ioport.h>
-#include <linux/kd.h>
-#include <linux/pm.h>
-#include <asm/io.h>
-
-#define KEYBOARD_IRQ 1
-#define DISABLE_KBD_DURING_INTERRUPTS 0
-
-extern int pckbd_setkeycode(unsigned int scancode, unsigned int keycode);
-extern int pckbd_getkeycode(unsigned int scancode);
-extern int pckbd_translate(unsigned char scancode, unsigned char *keycode,
- char raw_mode);
-extern char pckbd_unexpected_up(unsigned char keycode);
-extern void pckbd_leds(unsigned char leds);
-extern void pckbd_init_hw(void);
-extern int pckbd_pm_resume(struct pm_dev *, pm_request_t, void *);
-extern pm_callback pm_kbd_request_override;
-
-#define kbd_setkeycode pckbd_setkeycode
-#define kbd_getkeycode pckbd_getkeycode
-#define kbd_translate pckbd_translate
-#define kbd_unexpected_up pckbd_unexpected_up
-#define kbd_leds pckbd_leds
-#define kbd_init_hw pckbd_init_hw
-
-/* resource allocation */
-#define kbd_request_region()
-#define kbd_request_irq(handler) request_irq(KEYBOARD_IRQ, handler, 0, \
- "keyboard", NULL)
-
-/* How to access the keyboard macros on this platform. */
-#define kbd_read_input() inb(KBD_DATA_REG)
-#define kbd_read_status() inb(KBD_STATUS_REG)
-#define kbd_write_output(val) outb(val, KBD_DATA_REG)
-#define kbd_write_command(val) outb(val, KBD_CNTL_REG)
-
-/* Some stoneage hardware needs delays after some operations. */
-#define kbd_pause() do { } while(0)
-
-/*
- * Machine specific bits for the PS/2 driver
- */
-
-#define AUX_IRQ 12
-
-#define aux_request_irq(hand, dev_id) \
- request_irq(AUX_IRQ, hand, IRQF_SHARED, "PS2 Mouse", dev_id)
-
-#define aux_free_irq(dev_id) free_irq(AUX_IRQ, dev_id)
-
-#endif /* __KERNEL__ */
+++ /dev/null
-#ifndef __ASM_SH_M1543C_H
-#define __ASM_SH_M1543C_H
-
-/*
- * linux/include/asm-sh/m1543c.h
- * Copyright (C) 2001 Nobuhiro Sakawa
- * M1543C:PCI-ISA Bus Bridge with Super IO Chip support
- *
- * from
- *
- * linux/include/asm-sh/smc37c93x.h
- *
- * Copyright (C) 2000 Kazumoto Kojima
- *
- * SMSC 37C93x Super IO Chip support
- */
-
-/* Default base I/O address */
-#define FDC_PRIMARY_BASE 0x3f0
-#define IDE1_PRIMARY_BASE 0x1f0
-#define IDE1_SECONDARY_BASE 0x170
-#define PARPORT_PRIMARY_BASE 0x378
-#define COM1_PRIMARY_BASE 0x2f8
-#define COM2_PRIMARY_BASE 0x3f8
-#define COM3_PRIMARY_BASE 0x3e8
-#define RTC_PRIMARY_BASE 0x070
-#define KBC_PRIMARY_BASE 0x060
-#define AUXIO_PRIMARY_BASE 0x000 /* XXX */
-#define I8259_M_CR 0x20
-#define I8259_M_MR 0x21
-#define I8259_S_CR 0xa0
-#define I8259_S_MR 0xa1
-
-/* Logical device number */
-#define LDN_FDC 0
-#define LDN_IDE1 1
-#define LDN_IDE2 2
-#define LDN_PARPORT 3
-#define LDN_COM1 4
-#define LDN_COM2 5
-#define LDN_COM3 11
-#define LDN_RTC 6
-#define LDN_KBC 7
-
-/* Configuration port and key */
-#define CONFIG_PORT 0x3f0
-#define INDEX_PORT CONFIG_PORT
-#define DATA_PORT 0x3f1
-#define CONFIG_ENTER1 0x51
-#define CONFIG_ENTER2 0x23
-#define CONFIG_EXIT 0xbb
-
-/* Configuration index */
-#define CURRENT_LDN_INDEX 0x07
-#define POWER_CONTROL_INDEX 0x22
-#define ACTIVATE_INDEX 0x30
-#define IO_BASE_HI_INDEX 0x60
-#define IO_BASE_LO_INDEX 0x61
-#define IRQ_SELECT_INDEX 0x70
-#define PS2_IRQ_INDEX 0x72
-#define DMA_SELECT_INDEX 0x74
-
-/* UART stuff. Only for debugging. */
-/* UART Register */
-
-#define UART_RBR 0x0 /* Receiver Buffer Register (Read Only) */
-#define UART_THR 0x0 /* Transmitter Holding Register (Write Only) */
-#define UART_IER 0x2 /* Interrupt Enable Register */
-#define UART_IIR 0x4 /* Interrupt Ident Register (Read Only) */
-#define UART_FCR 0x4 /* FIFO Control Register (Write Only) */
-#define UART_LCR 0x6 /* Line Control Register */
-#define UART_MCR 0x8 /* MODEM Control Register */
-#define UART_LSR 0xa /* Line Status Register */
-#define UART_MSR 0xc /* MODEM Status Register */
-#define UART_SCR 0xe /* Scratch Register */
-#define UART_DLL 0x0 /* Divisor Latch (LS) */
-#define UART_DLM 0x2 /* Divisor Latch (MS) */
-
-#ifndef __ASSEMBLY__
-typedef struct uart_reg {
- volatile __u16 rbr;
- volatile __u16 ier;
- volatile __u16 iir;
- volatile __u16 lcr;
- volatile __u16 mcr;
- volatile __u16 lsr;
- volatile __u16 msr;
- volatile __u16 scr;
-} uart_reg;
-#endif /* ! __ASSEMBLY__ */
-
-/* Alias for Write Only Register */
-
-#define thr rbr
-#define tcr iir
-
-/* Alias for Divisor Latch Register */
-
-#define dll rbr
-#define dlm ier
-#define fcr iir
-
-/* Interrupt Enable Register */
-
-#define IER_ERDAI 0x0100 /* Enable Received Data Available Interrupt */
-#define IER_ETHREI 0x0200 /* Enable Transmitter Holding Register Empty Interrupt */
-#define IER_ELSI 0x0400 /* Enable Receiver Line Status Interrupt */
-#define IER_EMSI 0x0800 /* Enable MODEM Status Interrupt */
-
-/* Interrupt Ident Register */
-
-#define IIR_IP 0x0100 /* "0" if Interrupt Pending */
-#define IIR_IIB0 0x0200 /* Interrupt ID Bit 0 */
-#define IIR_IIB1 0x0400 /* Interrupt ID Bit 1 */
-#define IIR_IIB2 0x0800 /* Interrupt ID Bit 2 */
-#define IIR_FIFO 0xc000 /* FIFOs enabled */
-
-/* FIFO Control Register */
-
-#define FCR_FEN 0x0100 /* FIFO enable */
-#define FCR_RFRES 0x0200 /* Receiver FIFO reset */
-#define FCR_TFRES 0x0400 /* Transmitter FIFO reset */
-#define FCR_DMA 0x0800 /* DMA mode select */
-#define FCR_RTL 0x4000 /* Receiver triger (LSB) */
-#define FCR_RTM 0x8000 /* Receiver triger (MSB) */
-
-/* Line Control Register */
-
-#define LCR_WLS0 0x0100 /* Word Length Select Bit 0 */
-#define LCR_WLS1 0x0200 /* Word Length Select Bit 1 */
-#define LCR_STB 0x0400 /* Number of Stop Bits */
-#define LCR_PEN 0x0800 /* Parity Enable */
-#define LCR_EPS 0x1000 /* Even Parity Select */
-#define LCR_SP 0x2000 /* Stick Parity */
-#define LCR_SB 0x4000 /* Set Break */
-#define LCR_DLAB 0x8000 /* Divisor Latch Access Bit */
-
-/* MODEM Control Register */
-
-#define MCR_DTR 0x0100 /* Data Terminal Ready */
-#define MCR_RTS 0x0200 /* Request to Send */
-#define MCR_OUT1 0x0400 /* Out 1 */
-#define MCR_IRQEN 0x0800 /* IRQ Enable */
-#define MCR_LOOP 0x1000 /* Loop */
-
-/* Line Status Register */
-
-#define LSR_DR 0x0100 /* Data Ready */
-#define LSR_OE 0x0200 /* Overrun Error */
-#define LSR_PE 0x0400 /* Parity Error */
-#define LSR_FE 0x0800 /* Framing Error */
-#define LSR_BI 0x1000 /* Break Interrupt */
-#define LSR_THRE 0x2000 /* Transmitter Holding Register Empty */
-#define LSR_TEMT 0x4000 /* Transmitter Empty */
-#define LSR_FIFOE 0x8000 /* Receiver FIFO error */
-
-/* MODEM Status Register */
-
-#define MSR_DCTS 0x0100 /* Delta Clear to Send */
-#define MSR_DDSR 0x0200 /* Delta Data Set Ready */
-#define MSR_TERI 0x0400 /* Trailing Edge Ring Indicator */
-#define MSR_DDCD 0x0800 /* Delta Data Carrier Detect */
-#define MSR_CTS 0x1000 /* Clear to Send */
-#define MSR_DSR 0x2000 /* Data Set Ready */
-#define MSR_RI 0x4000 /* Ring Indicator */
-#define MSR_DCD 0x8000 /* Data Carrier Detect */
-
-/* Baud Rate Divisor */
-
-#define UART_CLK (1843200) /* 1.8432 MHz */
-#define UART_BAUD(x) (UART_CLK / (16 * (x)))
-
-/* RTC register definition */
-#define RTC_SECONDS 0
-#define RTC_SECONDS_ALARM 1
-#define RTC_MINUTES 2
-#define RTC_MINUTES_ALARM 3
-#define RTC_HOURS 4
-#define RTC_HOURS_ALARM 5
-#define RTC_DAY_OF_WEEK 6
-#define RTC_DAY_OF_MONTH 7
-#define RTC_MONTH 8
-#define RTC_YEAR 9
-#define RTC_FREQ_SELECT 10
-# define RTC_UIP 0x80
-# define RTC_DIV_CTL 0x70
-/* This RTC can work under 32.768KHz clock only. */
-# define RTC_OSC_ENABLE 0x20
-# define RTC_OSC_DISABLE 0x00
-#define RTC_CONTROL 11
-# define RTC_SET 0x80
-# define RTC_PIE 0x40
-# define RTC_AIE 0x20
-# define RTC_UIE 0x10
-# define RTC_SQWE 0x08
-# define RTC_DM_BINARY 0x04
-# define RTC_24H 0x02
-# define RTC_DST_EN 0x01
-
-#endif /* __ASM_SH_M1543C_H */
+++ /dev/null
-/*
- * MPC1211 uses PC/AT style RTC definitions.
- */
-#include <asm-x86/mc146818rtc_32.h>
-
-
+++ /dev/null
-#ifndef __ASM_SH_MPC1211_H
-#define __ASM_SH_MPC1211_H
-
-/*
- * linux/include/asm-sh/mpc1211.h
- *
- * Copyright (C) 2001 Saito.K & Jeanne
- *
- * Interface MPC-1211 support
- */
-
-#define PA_PCI_IO (0xa4000000) /* PCI I/O space */
-#define PA_PCI_MEM (0xb0000000) /* PCI MEM space */
-
-#define PCIPAR (0xa4000cf8) /* PCI Config address */
-#define PCIPDR (0xa4000cfc) /* PCI Config data */
-
-#endif /* __ASM_SH_MPC1211_H */
+++ /dev/null
-/*
- * Low-Level PCI Support for MPC-1211
- *
- * (c) 2002 Saito.K & Jeanne
- *
- */
-
-#ifndef _PCI_MPC1211_H_
-#define _PCI_MPC1211_H_
-
-#include <linux/pci.h>
-
-/* set debug level 4=verbose...1=terse */
-//#define DEBUG_PCI 3
-#undef DEBUG_PCI
-
-#ifdef DEBUG_PCI
-#define PCIDBG(n, x...) { if(DEBUG_PCI>=n) printk(x); }
-#else
-#define PCIDBG(n, x...)
-#endif
-
-/* startup values */
-#define PCI_PROBE_BIOS 1
-#define PCI_PROBE_CONF1 2
-#define PCI_PROBE_CONF2 4
-#define PCI_NO_CHECKS 0x400
-#define PCI_ASSIGN_ROMS 0x1000
-#define PCI_BIOS_IRQ_SCAN 0x2000
-
-/* MPC-1211 Specific Values */
-#define PCIPAR (0xa4000cf8) /* PCI Config address */
-#define PCIPDR (0xa4000cfc) /* PCI Config data */
-
-#define PA_PCI_IO (0xa4000000) /* PCI I/O space */
-#define PA_PCI_MEM (0xb0000000) /* PCI MEM space */
-
-#endif /* _PCI_MPC1211_H_ */
#define IRQ_SCIF0 (HL_FPGA_IRQ_BASE + 15)
#define IRQ_SCIF1 (HL_FPGA_IRQ_BASE + 16)
-unsigned char *highlander_init_irq_r7780mp(void);
-unsigned char *highlander_init_irq_r7780rp(void);
-unsigned char *highlander_init_irq_r7785rp(void);
+unsigned char *highlander_plat_irq_setup(void);
#endif /* __ASM_SH_RENESAS_R7780RP */
__asm__ __volatile__ ("putcfg %0, 0, r63\n" : : "r" (slot));
}
+#ifdef CONFIG_MMU
/* arch/sh64/mm/tlb.c */
int sh64_tlb_init(void);
unsigned long long sh64_next_free_dtlb_entry(void);
void sh64_setup_tlb_slot(unsigned long long config_addr, unsigned long eaddr,
unsigned long asid, unsigned long paddr);
void sh64_teardown_tlb_slot(unsigned long long config_addr);
-
+#else
+#define sh64_tlb_init() do { } while (0)
+#define sh64_next_free_dtlb_entry() (0)
+#define sh64_get_wired_dtlb_entry() (0)
+#define sh64_put_wired_dtlb_entry(entry) do { } while (0)
+#define sh64_setup_tlb_slot(conf, virt, asid, phys) do { } while (0)
+#define sh64_teardown_tlb_slot(addr) do { } while (0)
+#endif /* CONFIG_MMU */
#endif /* __ASSEMBLY__ */
#endif /* __ASM_SH_TLB_64_H */
.nr_balance_failed = 0, \
}
+#define cpu_to_node(cpu) ((void)(cpu),0)
+#define parent_node(node) ((void)(node),0)
+
+#define node_to_cpumask(node) ((void)node, cpu_online_map)
+#define node_to_first_cpu(node) ((void)(node),0)
+
+#define pcibus_to_node(bus) ((void)(bus), -1)
+#define pcibus_to_cpumask(bus) (pcibus_to_node(bus) == -1 ? \
+ CPU_MASK_ALL : \
+ node_to_cpumask(pcibus_to_node(bus)) \
+ )
#endif
#include <asm-generic/topology.h>
unsigned long insn, fixup;
};
+#ifdef CONFIG_MMU
#define ARCH_HAS_SEARCH_EXTABLE
+#endif
/* Returns 0 if exception not found and fixup.unit otherwise. */
extern unsigned long search_exception_table(unsigned long addr);
#define PSR_PIL 0x00000f00 /* processor interrupt level */
#define PSR_EF 0x00001000 /* enable floating point */
#define PSR_EC 0x00002000 /* enable co-processor */
+#define PSR_SYSCALL 0x00004000 /* inside of a syscall */
#define PSR_LE 0x00008000 /* SuperSparcII little-endian */
#define PSR_ICC 0x00f00000 /* integer condition codes */
#define PSR_C 0x00100000 /* carry bit */
#define UREG_FP UREG_I6
#define UREG_RETPC UREG_I7
+static inline bool pt_regs_is_syscall(struct pt_regs *regs)
+{
+ return (regs->psr & PSR_SYSCALL);
+}
+
+static inline bool pt_regs_clear_syscall(struct pt_regs *regs)
+{
+ return (regs->psr &= ~PSR_SYSCALL);
+}
+
/* A register window */
struct reg_window {
unsigned long locals[8];
#define SF_XXARG 0x5c
/* Stuff for the ptrace system call */
+#define PTRACE_SPARC_DETACH 11
#define PTRACE_GETREGS 12
#define PTRACE_SETREGS 13
#define PTRACE_GETFPREGS 14
size_t ss_size;
} stack_t;
-struct sparc_deliver_cookie {
- int restart_syscall;
- unsigned long orig_i0;
-};
-
-struct pt_regs;
-extern void ptrace_signal_deliver(struct pt_regs *regs, void *cookie);
+#define ptrace_signal_deliver(regs, cookie) do { } while (0)
#endif /* !(__KERNEL__) */
#define PSR_PIL 0x00000f00 /* processor interrupt level */
#define PSR_EF 0x00001000 /* enable floating point */
#define PSR_EC 0x00002000 /* enable co-processor */
+#define PSR_SYSCALL 0x00004000 /* inside of a syscall */
#define PSR_LE 0x00008000 /* SuperSparcII little-endian */
#define PSR_ICC 0x00f00000 /* integer condition codes */
#define PSR_C 0x00100000 /* carry bit */
PSR_S |
((tstate & TSTATE_ICC) >> 12) |
((tstate & TSTATE_XCC) >> 20) |
+ ((tstate & TSTATE_SYSCALL) ? PSR_SYSCALL : 0) |
PSR_V8PLUS);
}
#define TSTATE_PRIV _AC(0x0000000000000400,UL) /* Privilege. */
#define TSTATE_IE _AC(0x0000000000000200,UL) /* Interrupt Enable. */
#define TSTATE_AG _AC(0x0000000000000100,UL) /* Alternate Globals.*/
+#define TSTATE_SYSCALL _AC(0x0000000000000020,UL) /* in syscall trap */
#define TSTATE_CWP _AC(0x000000000000001f,UL) /* Curr Win-Pointer. */
/* Floating-Point Registers State Register.
return regs->magic & 0x1ff;
}
-static inline int pt_regs_clear_trap_type(struct pt_regs *regs)
+static inline bool pt_regs_is_syscall(struct pt_regs *regs)
{
- return regs->magic &= ~0x1ff;
+ return (regs->tstate & TSTATE_SYSCALL);
}
-static inline bool pt_regs_is_syscall(struct pt_regs *regs)
+static inline bool pt_regs_clear_syscall(struct pt_regs *regs)
{
- int tt = pt_regs_trap_type(regs);
-
- return (tt == 0x110 || tt == 0x111 || tt == 0x16d);
+ return (regs->tstate &= ~TSTATE_SYSCALL);
}
struct pt_regs32 {
#define SF_XXARG 0x5c
/* Stuff for the ptrace system call */
+#define PTRACE_SPARC_DETACH 11
#define PTRACE_GETREGS 12
#define PTRACE_SETREGS 13
#define PTRACE_GETFPREGS 14
void __user *ka_restorer;
};
-struct signal_deliver_cookie {
- int restart_syscall;
- unsigned long orig_i0;
-};
-
-struct pt_regs;
-extern void ptrace_signal_deliver(struct pt_regs *regs, void *cookie);
+#define ptrace_signal_deliver(regs, cookie) do { } while (0)
#endif /* !(__KERNEL__) */
nop;
#define SYSCALL_TRAP(routine, systbl) \
+ rdpr %pil, %g2; \
+ mov TSTATE_SYSCALL, %g3; \
sethi %hi(109f), %g7; \
- ba,pt %xcc, etrap; \
+ ba,pt %xcc, etrap_syscall; \
109: or %g7, %lo(109b), %g7; \
sethi %hi(systbl), %l7; \
ba,pt %xcc, routine; \
- or %l7, %lo(systbl), %l7; \
- nop; nop;
+ or %l7, %lo(systbl), %l7;
#define TRAP_UTRAP(handler,lvl) \
mov handler, %g3; \
#if __GNUC__ < 4 || (__GNUC__ == 4 && __GNUC_MINOR__ < 1)
/* Technically wrong, but this avoids compilation errors on some gcc
versions. */
-#define ADDR "=m" (*(volatile long *)addr)
-#define BIT_ADDR "=m" (((volatile int *)addr)[nr >> 5])
+#define ADDR "=m" (*(volatile long *) addr)
#else
#define ADDR "+m" (*(volatile long *) addr)
-#define BIT_ADDR "+m" (((volatile int *)addr)[nr >> 5])
#endif
-#define BASE_ADDR "m" (*(volatile int *)addr)
/**
* set_bit - Atomically set a bit in memory
*/
static inline void clear_bit(int nr, volatile void *addr)
{
- asm volatile(LOCK_PREFIX "btr %1,%2" : BIT_ADDR : "Ir" (nr), BASE_ADDR);
+ asm volatile(LOCK_PREFIX "btr %1,%0" : ADDR : "Ir" (nr));
}
/*
static inline void __clear_bit(int nr, volatile void *addr)
{
- asm volatile("btr %1,%2" : BIT_ADDR : "Ir" (nr), BASE_ADDR);
+ asm volatile("btr %1,%0" : ADDR : "Ir" (nr));
}
/*
*/
static inline void __change_bit(int nr, volatile void *addr)
{
- asm volatile("btc %1,%2" : BIT_ADDR : "Ir" (nr), BASE_ADDR);
+ asm volatile("btc %1,%0" : ADDR : "Ir" (nr));
}
/**
*/
static inline void change_bit(int nr, volatile void *addr)
{
- asm volatile(LOCK_PREFIX "btc %1,%2" : BIT_ADDR : "Ir" (nr), BASE_ADDR);
+ asm volatile(LOCK_PREFIX "btc %1,%0" : ADDR : "Ir" (nr));
}
/**
{
int oldbit;
- asm volatile("bts %2,%3\n\t"
- "sbb %0,%0"
- : "=r" (oldbit), BIT_ADDR : "Ir" (nr), BASE_ADDR);
+ asm("bts %2,%1\n\t"
+ "sbb %0,%0"
+ : "=r" (oldbit), ADDR
+ : "Ir" (nr));
return oldbit;
}
{
int oldbit;
- asm volatile("btr %2,%3\n\t"
+ asm volatile("btr %2,%1\n\t"
"sbb %0,%0"
- : "=r" (oldbit), BIT_ADDR : "Ir" (nr), BASE_ADDR);
+ : "=r" (oldbit), ADDR
+ : "Ir" (nr));
return oldbit;
}
{
int oldbit;
- asm volatile("btc %2,%3\n\t"
+ asm volatile("btc %2,%1\n\t"
"sbb %0,%0"
- : "=r" (oldbit), BIT_ADDR : "Ir" (nr), BASE_ADDR);
+ : "=r" (oldbit), ADDR
+ : "Ir" (nr) : "memory");
return oldbit;
}
{
int oldbit;
- asm volatile("bt %2,%3\n\t"
+ asm volatile("bt %2,%1\n\t"
"sbb %0,%0"
: "=r" (oldbit)
- : "m" (((volatile const int *)addr)[nr >> 5]),
- "Ir" (nr), BASE_ADDR);
+ : "m" (*(unsigned long *)addr), "Ir" (nr));
return oldbit;
}
}
#endif /* __KERNEL__ */
-#undef BASE_ADDR
-#undef BIT_ADDR
#undef ADDR
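The bitops hunk above folds the old BIT_ADDR/BASE_ADDR pair into a single
"+m" operand, so the compiler sees each bit operation as one read-modify-write
of the bitmap word. As a rough, non-atomic userspace sketch of the same
operand style (assuming x86 and GCC inline asm; sketch_set_bit() and
sketch_test_and_clear_bit() are invented names, not the kernel implementation):

  #include <stdio.h>

  /* Bitmap word passed as one read-write ("+m") memory operand, so the
   * compiler knows the asm both reads and modifies it. No lock prefix:
   * this is a single-threaded illustration only. */
  static inline void sketch_set_bit(int nr, volatile unsigned int *addr)
  {
          asm volatile("btsl %1,%0" : "+m" (*addr) : "r" (nr));
  }

  static inline int sketch_test_and_clear_bit(int nr, volatile unsigned int *addr)
  {
          int oldbit;

          asm volatile("btrl %2,%1\n\t"
                       "sbbl %0,%0"
                       : "=r" (oldbit), "+m" (*addr)
                       : "r" (nr));
          return oldbit;
  }

  int main(void)
  {
          unsigned int word = 0;
          int was_set;

          sketch_set_bit(3, &word);
          was_set = sketch_test_and_clear_bit(3, &word);
          printf("bit 3 was %d, word is now %#x\n", was_set != 0, word);
          return 0;
  }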
static inline void set_bit_string(unsigned long *bitmap,
return (is_geode_gx() || is_geode_lx());
}
-/*
- * The VSA has virtual registers that we can query for a signature.
- */
+#ifdef CONFIG_MGEODE_LX
+extern int geode_has_vsa2(void);
+#else
static inline int geode_has_vsa2(void)
{
- outw(VSA_VR_UNLOCK, VSA_VRC_INDEX);
- outw(VSA_VR_SIGNATURE, VSA_VRC_INDEX);
-
- return (inw(VSA_VRC_DATA) == VSA_SIG);
+ return 0;
}
+#endif
/* MFGPTs */
*/
static inline int restore_i387(struct _fpstate __user *buf)
{
- set_used_math();
+ struct task_struct *tsk = current;
+ int err;
+
+ if (!used_math()) {
+ err = init_fpu(tsk);
+ if (err)
+ return err;
+ }
+
if (!(task_thread_info(current)->status & TS_USEDFPU)) {
clts();
task_thread_info(current)->status |= TS_USEDFPU;
#include <linux/types.h>
+#ifdef CONFIG_X86_PAT
extern int pat_wc_enabled;
+extern void validate_pat_support(struct cpuinfo_x86 *c);
+#else
+static const int pat_wc_enabled = 0;
+static inline void validate_pat_support(struct cpuinfo_x86 *c) { }
+#endif
extern void pat_init(void);
unsigned long req_type, unsigned long *ret_type);
extern int free_memtype(u64 start, u64 end);
+extern void pat_disable(char *reason);
+
#endif
*/
#ifdef CONFIG_X86_32
-typedef char _slock_t;
-# define LOCK_INS_DEC "decb"
-# define LOCK_INS_XCH "xchgb"
-# define LOCK_INS_MOV "movb"
-# define LOCK_INS_CMP "cmpb"
# define LOCK_PTR_REG "a"
#else
-typedef int _slock_t;
-# define LOCK_INS_DEC "decl"
-# define LOCK_INS_XCH "xchgl"
-# define LOCK_INS_MOV "movl"
-# define LOCK_INS_CMP "cmpl"
# define LOCK_PTR_REG "D"
#endif
#if (NR_CPUS < 256)
static inline int __raw_spin_is_locked(raw_spinlock_t *lock)
{
- int tmp = *(volatile signed int *)(&(lock)->slock);
+ int tmp = ACCESS_ONCE(lock->slock);
return (((tmp >> 8) & 0xff) != (tmp & 0xff));
}
static inline int __raw_spin_is_contended(raw_spinlock_t *lock)
{
- int tmp = *(volatile signed int *)(&(lock)->slock);
+ int tmp = ACCESS_ONCE(lock->slock);
return (((tmp >> 8) & 0xff) - (tmp & 0xff)) > 1;
}
#else
static inline int __raw_spin_is_locked(raw_spinlock_t *lock)
{
- int tmp = *(volatile signed int *)(&(lock)->slock);
+ int tmp = ACCESS_ONCE(lock->slock);
return (((tmp >> 16) & 0xffff) != (tmp & 0xffff));
}
static inline int __raw_spin_is_contended(raw_spinlock_t *lock)
{
- int tmp = *(volatile signed int *)(&(lock)->slock);
+ int tmp = ACCESS_ONCE(lock->slock);
return (((tmp >> 16) & 0xffff) - (tmp & 0xffff)) > 1;
}
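__raw_spin_is_locked()/__raw_spin_is_contended() above just compare the ticket
lock's "next" and "owner" fields, now loaded through ACCESS_ONCE() instead of
a hand-rolled volatile cast. A minimal userspace model of the NR_CPUS < 256
layout (two 8-bit tickets packed in one word; all names here are invented for
illustration):

  #include <stdio.h>

  #define MODEL_ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))

  struct model_ticket_lock {
          unsigned int slock;     /* low byte: owner, next byte: next ticket */
  };

  static int model_is_locked(struct model_ticket_lock *lock)
  {
          int tmp = MODEL_ACCESS_ONCE(lock->slock);

          return ((tmp >> 8) & 0xff) != (tmp & 0xff);
  }

  static int model_is_contended(struct model_ticket_lock *lock)
  {
          int tmp = MODEL_ACCESS_ONCE(lock->slock);

          return (((tmp >> 8) & 0xff) - (tmp & 0xff)) > 1;
  }

  int main(void)
  {
          struct model_ticket_lock l = { .slock = 0x0000 };

          printf("free:      locked=%d contended=%d\n",
                 model_is_locked(&l), model_is_contended(&l));
          l.slock = 0x0100;       /* one holder, nobody waiting */
          printf("held:      locked=%d contended=%d\n",
                 model_is_locked(&l), model_is_contended(&l));
          l.slock = 0x0300;       /* one holder, two waiters */
          printf("contended: locked=%d contended=%d\n",
                 model_is_locked(&l), model_is_contended(&l));
          return 0;
  }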
#ifndef _ASM_X86_TOPOLOGY_H
#define _ASM_X86_TOPOLOGY_H
+#ifdef CONFIG_X86_32
+# ifdef CONFIG_X86_HT
+# define ENABLE_TOPO_DEFINES
+# endif
+#else
+# ifdef CONFIG_SMP
+# define ENABLE_TOPO_DEFINES
+# endif
+#endif
+
#ifdef CONFIG_NUMA
#include <linux/cpumask.h>
#include <asm/mpspec.h>
extern unsigned long node_remap_size[];
#define node_has_online_mem(nid) (node_start_pfn[nid] != node_end_pfn[nid])
-# ifdef CONFIG_X86_HT
-# define ENABLE_TOPO_DEFINES
-# endif
-
# define SD_CACHE_NICE_TRIES 1
# define SD_IDLE_IDX 1
# define SD_NEWIDLE_IDX 2
#else
-# ifdef CONFIG_SMP
-# define ENABLE_TOPO_DEFINES
-# endif
-
# define SD_CACHE_NICE_TRIES 2
# define SD_IDLE_IDX 2
# define SD_NEWIDLE_IDX 2
# define __section(S) __attribute__ ((__section__(#S)))
#endif
+/*
+ * Prevent the compiler from merging or refetching accesses. The compiler
+ * is also forbidden from reordering successive instances of ACCESS_ONCE(),
+ * but only when the compiler is aware of some particular ordering. One way
+ * to make the compiler aware of ordering is to put the two invocations of
+ * ACCESS_ONCE() in different C statements.
+ *
+ * This macro does absolutely -nothing- to prevent the CPU from reordering,
+ * merging, or refetching absolutely anything at any time.
+ */
+#define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))
+
#endif /* __LINUX_COMPILER_H */
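ACCESS_ONCE() is added here in compiler.h (the copy previously living in
rcupdate.h is removed further down in this patch). A small userspace
illustration of what the volatile cast buys, assuming GCC; the MODEL_ prefix
marks this as a sketch, not kernel code:

  #include <pthread.h>
  #include <stdio.h>

  #define MODEL_ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))

  static int stop_flag;   /* set by another thread */

  static void *setter(void *arg)
  {
          MODEL_ACCESS_ONCE(stop_flag) = 1;       /* single, non-mergeable store */
          return NULL;
  }

  int main(void)
  {
          pthread_t t;

          pthread_create(&t, NULL, setter, NULL);

          /*
           * Without the volatile cast the compiler may hoist the load of
           * stop_flag out of the loop and spin forever; with it, every
           * iteration performs a fresh load. No CPU-level ordering is
           * implied, exactly as the comment above says.
           */
          while (!MODEL_ACCESS_ONCE(stop_flag))
                  ;

          pthread_join(t, NULL);
          printf("observed stop_flag=%d\n", stop_flag);
          return 0;
  }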
* 32 bit parent directory inode number.
*/
FILEID_INO32_GEN_PARENT = 2,
+
+ /*
+ * 32 bit block number, 16 bit partition reference,
+ * 16 bit unused, 32 bit generation number.
+ */
+ FILEID_UDF_WITHOUT_PARENT = 0x51,
+
+ /*
+ * 32 bit block number, 16 bit partition reference,
+ * 16 bit unused, 32 bit generation number,
+ * 32 bit parent block number, 32 bit parent generation number
+ */
+ FILEID_UDF_WITH_PARENT = 0x52,
};
struct fid {
u32 parent_ino;
u32 parent_gen;
} i32;
+ struct {
+ u32 block;
+ u16 partref;
+ u16 parent_partref;
+ u32 generation;
+ u32 parent_block;
+ u32 parent_generation;
+ } udf;
__u32 raw[0];
};
};
extern void clear_inode(struct inode *);
extern void destroy_inode(struct inode *);
extern struct inode *new_inode(struct super_block *);
-extern int __remove_suid(struct dentry *, int);
extern int should_remove_suid(struct dentry *);
extern int remove_suid(struct dentry *);
static inline void disk_stat_set_all(struct gendisk *gendiskp, int value) {
int i;
+
for_each_possible_cpu(i)
memset(per_cpu_ptr(gendiskp->dkstats, i), value,
- sizeof (struct disk_stats));
+ sizeof(struct disk_stats));
}
#define __part_stat_add(part, field, addnd) \
(per_cpu_ptr(part->dkstats, smp_processor_id())->field += addnd)
-#define __all_stat_add(gendiskp, field, addnd, sector) \
+#define __all_stat_add(gendiskp, part, field, addnd, sector) \
({ \
- struct hd_struct *part = get_part(gendiskp, sector); \
if (part) \
__part_stat_add(part, field, addnd); \
__disk_stat_add(gendiskp, field, addnd); \
res; \
})
-static inline void part_stat_set_all(struct hd_struct *part, int value) {
+static inline void part_stat_set_all(struct hd_struct *part, int value)
+{
int i;
+
for_each_possible_cpu(i)
memset(per_cpu_ptr(part->dkstats, i), value,
- sizeof(struct disk_stats));
+ sizeof(struct disk_stats));
}
#else /* !CONFIG_SMP */
#define __part_stat_add(part, field, addnd) \
(part->dkstats.field += addnd)
-#define __all_stat_add(gendiskp, field, addnd, sector) \
+#define __all_stat_add(gendiskp, part, field, addnd, sector) \
({ \
- struct hd_struct *part = get_part(gendiskp, sector); \
if (part) \
part->dkstats.field += addnd; \
__disk_stat_add(gendiskp, field, addnd); \
#define part_stat_sub(gendiskp, field, subnd) \
part_stat_add(gendiskp, field, -subnd)
-#define all_stat_add(gendiskp, field, addnd, sector) \
+#define all_stat_add(gendiskp, part, field, addnd, sector) \
do { \
preempt_disable(); \
- __all_stat_add(gendiskp, field, addnd, sector); \
+ __all_stat_add(gendiskp, part, field, addnd, sector); \
preempt_enable(); \
} while (0)
#define all_stat_dec(gendiskp, field, sector) \
all_stat_add(gendiskp, field, -1, sector)
-#define __all_stat_inc(gendiskp, field, sector) \
- __all_stat_add(gendiskp, field, 1, sector)
-#define all_stat_inc(gendiskp, field, sector) \
- all_stat_add(gendiskp, field, 1, sector)
+#define __all_stat_inc(gendiskp, part, field, sector) \
+ __all_stat_add(gendiskp, part, field, 1, sector)
+#define all_stat_inc(gendiskp, part, field, sector) \
+ all_stat_add(gendiskp, part, field, 1, sector)
-#define __all_stat_sub(gendiskp, field, subnd, sector) \
- __all_stat_add(gendiskp, field, -subnd, sector)
-#define all_stat_sub(gendiskp, field, subnd, sector) \
- all_stat_add(gendiskp, field, -subnd, sector)
+#define __all_stat_sub(gendiskp, part, field, subnd, sector) \
+ __all_stat_add(gendiskp, part, field, -subnd, sector)
+#define all_stat_sub(gendiskp, part, field, subnd, sector) \
+ all_stat_add(gendiskp, part, field, -subnd, sector)
/* Inlines to alloc and free disk stats in struct gendisk */
#ifdef CONFIG_SMP
#define in_softirq() (softirq_count())
#define in_interrupt() (irq_count())
+#if defined(CONFIG_PREEMPT)
+# define PREEMPT_INATOMIC_BASE kernel_locked()
+# define PREEMPT_CHECK_OFFSET 1
+#else
+# define PREEMPT_INATOMIC_BASE 0
+# define PREEMPT_CHECK_OFFSET 0
+#endif
+
/*
* Are we running in atomic context? WARNING: this macro cannot
* always detect atomic context; in particular, it cannot know about
* used in the general case to determine whether sleeping is possible.
* Do not use in_atomic() in driver code.
*/
-#define in_atomic() ((preempt_count() & ~PREEMPT_ACTIVE) != 0)
-
-#ifdef CONFIG_PREEMPT
-# define PREEMPT_CHECK_OFFSET 1
-#else
-# define PREEMPT_CHECK_OFFSET 0
-#endif
+#define in_atomic() ((preempt_count() & ~PREEMPT_ACTIVE) != PREEMPT_INATOMIC_BASE)
/*
* Check whether we were atomic before we did preempt_disable():
- * (used by the scheduler)
+ * (used by the scheduler, *after* releasing the kernel lock)
*/
#define in_atomic_preempt_off() \
((preempt_count() & ~PREEMPT_ACTIVE) != PREEMPT_CHECK_OFFSET)
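Because the BKL is a spinlock again (see the lib/kernel_lock.c hunk later in
this patch), holding it contributes one to the preempt count under
CONFIG_PREEMPT, and in_atomic() now subtracts that expected base. A tiny
arithmetic model of the check, with illustrative names and values only:

  #include <stdio.h>

  #define MODEL_PREEMPT_ACTIVE 0x10000000

  /* in_atomic() analogue: preempt count above the expected base is "atomic" */
  static int model_in_atomic(unsigned int preempt_count, int bkl_held)
  {
          unsigned int base = bkl_held ? 1 : 0;   /* PREEMPT_INATOMIC_BASE */

          return (preempt_count & ~MODEL_PREEMPT_ACTIVE) != base;
  }

  int main(void)
  {
          printf("BKL only:          %d\n", model_in_atomic(1, 1)); /* 0 */
          printf("BKL + spinlock:    %d\n", model_in_atomic(2, 1)); /* 1 */
          printf("spinlock, no BKL:  %d\n", model_in_atomic(1, 0)); /* 1 */
          return 0;
  }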
* client handles for the extra addresses.
*/
extern struct i2c_client *
-i2c_new_dummy(struct i2c_adapter *adap, u16 address, const char *type);
+i2c_new_dummy(struct i2c_adapter *adap, u16 address);
extern void i2c_unregister_device(struct i2c_client *);
return (task_nice(task) + 20) / 5;
}
+/*
+ * This is for the case where the task hasn't asked for a specific IO class.
+ * Check for idle and rt task process, and return appropriate IO class.
+ */
+static inline int task_nice_ioclass(struct task_struct *task)
+{
+ if (task->policy == SCHED_IDLE)
+ return IOPRIO_CLASS_IDLE;
+ else if (task->policy == SCHED_FIFO || task->policy == SCHED_RR)
+ return IOPRIO_CLASS_RT;
+ else
+ return IOPRIO_CLASS_BE;
+}
+
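task_nice_ioclass() derives a default I/O class from the scheduling policy
when the task never set one explicitly. For context, an ioprio value packs
such a class together with a priority level; a standalone sketch of that
composition follows (the shift of 13 and the class numbering mirror
include/linux/ioprio.h as I understand it, so treat them as assumptions):

  #include <stdio.h>

  #define MODEL_IOPRIO_CLASS_SHIFT 13
  #define MODEL_IOPRIO_PRIO_VALUE(class, data) \
          (((class) << MODEL_IOPRIO_CLASS_SHIFT) | (data))

  enum { MODEL_IOPRIO_CLASS_NONE, MODEL_IOPRIO_CLASS_RT,
         MODEL_IOPRIO_CLASS_BE, MODEL_IOPRIO_CLASS_IDLE };

  int main(void)
  {
          /* e.g. a SCHED_NORMAL task with nice 0 maps to best-effort, level 4 */
          int prio = MODEL_IOPRIO_PRIO_VALUE(MODEL_IOPRIO_CLASS_BE, 4);

          printf("class=%d data=%d raw=%#x\n",
                 prio >> MODEL_IOPRIO_CLASS_SHIFT,
                 prio & ((1 << MODEL_IOPRIO_CLASS_SHIFT) - 1), prio);
          return 0;
  }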
/*
* For inheritance, return the highest of the two given priorities
*/
static inline int ata_check_ready(u8 status)
{
- /* Some controllers report 0x77 or 0x7f during intermediate
- * not-ready stages.
- */
- if (status == 0x77 || status == 0x7f)
- return 0;
+ if (!(status & ATA_BUSY))
+ return 1;
/* 0xff indicates either no device or device not ready */
if (status == 0xff)
return -ENODEV;
- return !(status & ATA_BUSY);
+ return 0;
}
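The rewritten ata_check_ready() trusts BSY directly: ready as soon as BSY is
clear, -ENODEV for an all-ones status, otherwise keep polling; the old
0x77/0x7f special case is gone. A standalone model of that decision (ATA_BUSY
taken as 0x80 here, which is my assumption rather than something quoted from
the patch):

  #include <errno.h>
  #include <stdio.h>

  #define MODEL_ATA_BUSY 0x80     /* BSY bit of the ATA status register */

  static int model_check_ready(unsigned char status)
  {
          if (!(status & MODEL_ATA_BUSY))
                  return 1;               /* ready */
          if (status == 0xff)
                  return -ENODEV;         /* no device / device not ready */
          return 0;                       /* still busy, keep polling */
  }

  int main(void)
  {
          /* 0x7f is the value the old code special-cased; it now reads as ready */
          unsigned char samples[] = { 0x50, 0x80, 0xff, 0x7f };
          size_t i;

          for (i = 0; i < sizeof(samples); i++)
                  printf("status %#04x -> %d\n", samples[i],
                         model_check_ready(samples[i]));
          return 0;
  }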
/*
* MV-643XX ethernet platform device data definition file.
*/
+
#ifndef __LINUX_MV643XX_ETH_H
#define __LINUX_MV643XX_ETH_H
-#define MV643XX_ETH_SHARED_NAME "mv643xx_eth_shared"
-#define MV643XX_ETH_NAME "mv643xx_eth"
+#include <linux/mbus.h>
+
+#define MV643XX_ETH_SHARED_NAME "mv643xx_eth"
+#define MV643XX_ETH_NAME "mv643xx_eth_port"
#define MV643XX_ETH_SHARED_REGS 0x2000
#define MV643XX_ETH_SHARED_REGS_SIZE 0x2000
#define MV643XX_ETH_BAR_4 0x2220
#define MV643XX_ETH_SIZE_REG_4 0x2224
#define MV643XX_ETH_BASE_ADDR_ENABLE_REG 0x2290
+struct mv643xx_eth_shared_platform_data {
+ struct mbus_dram_target_info *dram;
+ unsigned int t_clk;
+};
+
struct mv643xx_eth_platform_data {
+ struct platform_device *shared;
int port_number;
+
+ struct platform_device *shared_smi;
+
u16 force_phy_addr; /* force override if phy_addr == 0 */
u16 phy_addr;
struct nf_ct_sip_master {
unsigned int register_cseq;
+ unsigned int invite_cseq;
};
enum sip_expectation_classes {
#include <linux/i2c.h>
-#ifdef CONFIG_OF_I2C
-
void of_register_i2c_devices(struct i2c_adapter *adap,
struct device_node *adap_node);
-#endif /* CONFIG_OF_I2C */
-
#endif /* __LINUX_OF_I2C_H */
void mdiobus_unregister(struct mii_bus *bus);
void phy_sanitize_settings(struct phy_device *phydev);
int phy_stop_interrupts(struct phy_device *phydev);
+int phy_enable_interrupts(struct phy_device *phydev);
+int phy_disable_interrupts(struct phy_device *phydev);
static inline int phy_read_status(struct phy_device *phydev) {
return phydev->drv->read_status(phydev);
int (*run)(struct phy_device *));
int phy_scan_fixups(struct phy_device *phydev);
+int __init mdio_bus_init(void);
+void mdio_bus_exit(void);
+
extern struct bus_type mdio_bus_type;
#endif /* __PHY_H */
*/
#define rcu_read_unlock_bh() __rcu_read_unlock_bh()
-/*
- * Prevent the compiler from merging or refetching accesses. The compiler
- * is also forbidden from reordering successive instances of ACCESS_ONCE(),
- * but only when the compiler is aware of some particular ordering. One way
- * to make the compiler aware of ordering is to put the two invocations of
- * ACCESS_ONCE() in different C statements.
- *
- * This macro does absolutely -nothing- to prevent the CPU from reordering,
- * merging, or refetching absolutely anything at any time.
- */
-#define ACCESS_ONCE(x) (*(volatile typeof(x) *)&(x))
-
/**
* rcu_dereference - fetch an RCU-protected pointer in an
* RCU read-side critical section. This pointer may later
* cond_resched_lock() will drop the spinlock before scheduling,
* cond_resched_softirq() will enable bhs before scheduling.
*/
+extern int _cond_resched(void);
#ifdef CONFIG_PREEMPT
static inline int cond_resched(void)
{
return 0;
}
#else
-extern int _cond_resched(void);
static inline int cond_resched(void)
{
return _cond_resched();
#endif
extern int cond_resched_lock(spinlock_t * lock);
extern int cond_resched_softirq(void);
+static inline int cond_resched_bkl(void)
+{
+ return _cond_resched();
+}
/*
* Does a critical section need to be broken due to another
#else
#define MODULE_VERMAGIC_MODULE_UNLOAD ""
#endif
+#ifdef CONFIG_MODVERSIONS
+#define MODULE_VERMAGIC_MODVERSIONS "modversions "
+#else
+#define MODULE_VERMAGIC_MODVERSIONS ""
+#endif
#ifndef MODULE_ARCH_VERMAGIC
#define MODULE_ARCH_VERMAGIC ""
#endif
#define VERMAGIC_STRING \
UTS_RELEASE " " \
MODULE_VERMAGIC_SMP MODULE_VERMAGIC_PREEMPT \
- MODULE_VERMAGIC_MODULE_UNLOAD MODULE_ARCH_VERMAGIC
+ MODULE_VERMAGIC_MODULE_UNLOAD MODULE_VERMAGIC_MODVERSIONS \
+ MODULE_ARCH_VERMAGIC
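With MODULE_VERMAGIC_MODVERSIONS in place, a CONFIG_MODVERSIONS kernel now
advertises "modversions " in its vermagic, and modules built without it will
no longer match silently. Roughly, the string is assembled like this (a
standalone sketch with made-up example values, not the real UTS_RELEASE or
config state):

  #include <stdio.h>

  /* Illustrative stand-ins for the real config-dependent macros. */
  #define MODEL_UTS_RELEASE              "2.6.26-rc1"
  #define MODEL_VERMAGIC_SMP             "SMP "
  #define MODEL_VERMAGIC_PREEMPT         "preempt "
  #define MODEL_VERMAGIC_MODULE_UNLOAD   "mod_unload "
  #define MODEL_VERMAGIC_MODVERSIONS     "modversions "
  #define MODEL_ARCH_VERMAGIC            ""

  #define MODEL_VERMAGIC_STRING                                     \
          MODEL_UTS_RELEASE " "                                     \
          MODEL_VERMAGIC_SMP MODEL_VERMAGIC_PREEMPT                 \
          MODEL_VERMAGIC_MODULE_UNLOAD MODEL_VERMAGIC_MODVERSIONS   \
          MODEL_ARCH_VERMAGIC

  int main(void)
  {
          /* e.g. "2.6.26-rc1 SMP preempt mod_unload modversions " */
          printf("%s\n", MODEL_VERMAGIC_STRING);
          return 0;
  }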
int (*resume)(struct i2c_client *client);
int (*legacy_probe)(struct i2c_adapter *adapter);
int legacy_class;
+ const struct i2c_device_id *id_table;
};
static struct v4l2_i2c_driver_data v4l2_i2c_data;
v4l2_i2c_driver.command = v4l2_i2c_data.command;
v4l2_i2c_driver.probe = v4l2_i2c_data.probe;
v4l2_i2c_driver.remove = v4l2_i2c_data.remove;
+ v4l2_i2c_driver.id_table = v4l2_i2c_data.id_table;
err = i2c_add_driver(&v4l2_i2c_driver);
if (err)
i2c_del_driver(&v4l2_i2c_driver_legacy);
int (*resume)(struct i2c_client *client);
int (*legacy_probe)(struct i2c_adapter *adapter);
int legacy_class;
+ const struct i2c_device_id *id_table;
};
static struct v4l2_i2c_driver_data v4l2_i2c_data;
v4l2_i2c_driver.remove = v4l2_i2c_data.remove;
v4l2_i2c_driver.suspend = v4l2_i2c_data.suspend;
v4l2_i2c_driver.resume = v4l2_i2c_data.resume;
+ v4l2_i2c_driver.id_table = v4l2_i2c_data.id_table;
return i2c_add_driver(&v4l2_i2c_driver);
}
help
Enable support for generating core dumps. Disabling saves about 4k.
+config PCSPKR_PLATFORM
+ bool "Enable PC-Speaker support" if EMBEDDED
+ depends on ALPHA || X86 || MIPS || PPC_PREP || PPC_CHRP || PPC_PSERIES
+ default y
+ help
+ This option allows you to disable the internal PC-Speaker
+ support, saving some memory.
+
config COMPAT_BRK
bool "Disable heap randomization"
default y
depends on MODULES
default n
help
- This option allows loading of modules even if that would set the
- 'F' (forced) taint, due to lack of version info. Which is
- usually a really bad idea.
+ Allow loading of modules without version information (i.e. modprobe
+ --force). Forced module loading sets the 'F' (forced) taint flag and
+ is usually a really bad idea.
config MODULE_UNLOAD
bool "Module unloading"
return task_cs(current) == cpuset_being_rebound;
}
-static int update_relax_domain_level(struct cpuset *cs, char *buf)
+static int update_relax_domain_level(struct cpuset *cs, s64 val)
{
- int val = simple_strtol(buf, NULL, 10);
-
- if (val < 0)
+ if ((int)val < 0)
val = -1;
if (val != cs->relax_domain_level) {
case FILE_MEMLIST:
retval = update_nodemask(cs, buffer);
break;
- case FILE_SCHED_RELAX_DOMAIN_LEVEL:
- retval = update_relax_domain_level(cs, buffer);
- break;
default:
retval = -EINVAL;
goto out2;
return retval;
}
+static int cpuset_write_s64(struct cgroup *cgrp, struct cftype *cft, s64 val)
+{
+ int retval = 0;
+ struct cpuset *cs = cgroup_cs(cgrp);
+ cpuset_filetype_t type = cft->private;
+
+ cgroup_lock();
+
+ if (cgroup_is_removed(cgrp)) {
+ cgroup_unlock();
+ return -ENODEV;
+ }
+ switch (type) {
+ case FILE_SCHED_RELAX_DOMAIN_LEVEL:
+ retval = update_relax_domain_level(cs, val);
+ break;
+ default:
+ retval = -EINVAL;
+ break;
+ }
+ cgroup_unlock();
+ return retval;
+}
+
/*
* These ascii lists should be read in a single call, by using a user
* buffer large enough to hold the entire map. If read in smaller
case FILE_MEMLIST:
s += cpuset_sprintf_memlist(s, cs);
break;
- case FILE_SCHED_RELAX_DOMAIN_LEVEL:
- s += sprintf(s, "%d", cs->relax_domain_level);
- break;
default:
retval = -EINVAL;
goto out;
}
}
+static s64 cpuset_read_s64(struct cgroup *cont, struct cftype *cft)
+{
+ struct cpuset *cs = cgroup_cs(cont);
+ cpuset_filetype_t type = cft->private;
+ switch (type) {
+ case FILE_SCHED_RELAX_DOMAIN_LEVEL:
+ return cs->relax_domain_level;
+ default:
+ BUG();
+ }
+}
+
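sched_relax_domain_level is now read and written through the cgroup s64
handlers, so the special value -1 ("use the system default") survives the
round trip instead of being parsed as an unsigned number. From user space it
remains a plain text file; a hedged sketch of writing it in C (both the mount
point and the exact file name are assumptions that depend on how the cpuset
hierarchy is mounted):

  #include <stdio.h>

  int main(void)
  {
          /* Path assumes a cpuset hierarchy mounted at /dev/cpuset. */
          const char *path = "/dev/cpuset/mygroup/sched_relax_domain_level";
          FILE *f = fopen(path, "w");

          if (!f) {
                  perror("fopen");
                  return 1;
          }
          /* -1 asks for the system default relax-domain level. */
          fprintf(f, "%d\n", -1);
          fclose(f);
          return 0;
  }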
/*
* for the common functions, 'private' gives the type of file
{
.name = "sched_relax_domain_level",
- .read_u64 = cpuset_read_u64,
- .write_u64 = cpuset_write_u64,
+ .read_s64 = cpuset_read_s64,
+ .write_s64 = cpuset_write_s64,
.private = FILE_SCHED_RELAX_DOMAIN_LEVEL,
},
if (!crc)
return 1;
+ /* No versions at all? modprobe --force does this. */
+ if (versindex == 0)
+ return try_to_force_load(mod, symname) == 0;
+
versions = (void *) sechdrs[versindex].sh_addr;
num_versions = sechdrs[versindex].sh_size
/ sizeof(struct modversion_info);
goto bad_version;
}
- if (!try_to_force_load(mod, symname))
- return 1;
+ printk(KERN_WARNING "%s: no symbol version for %s\n",
+ mod->name, symname);
+ return 0;
bad_version:
printk("%s: disagrees about version of symbol %s\n",
return check_version(sechdrs, versindex, "struct_module", mod, crc);
}
-/* First part is kernel version, which we ignore. */
-static inline int same_magic(const char *amagic, const char *bmagic)
+/* First part is kernel version, which we ignore if module has crcs. */
+static inline int same_magic(const char *amagic, const char *bmagic,
+ bool has_crcs)
{
- amagic += strcspn(amagic, " ");
- bmagic += strcspn(bmagic, " ");
+ if (has_crcs) {
+ amagic += strcspn(amagic, " ");
+ bmagic += strcspn(bmagic, " ");
+ }
return strcmp(amagic, bmagic) == 0;
}
#else
return 1;
}
-static inline int same_magic(const char *amagic, const char *bmagic)
+static inline int same_magic(const char *amagic, const char *bmagic,
+ bool has_crcs)
{
return strcmp(amagic, bmagic) == 0;
}
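same_magic() now skips the leading kernel-release word only when the module
actually carries CRCs, since in that case the per-symbol versions already
guard compatibility; a module stripped of version info must match the release
string exactly. A standalone sketch of that comparison (model_same_magic() is
an invented name):

  #include <stdbool.h>
  #include <stdio.h>
  #include <string.h>

  static bool model_same_magic(const char *amagic, const char *bmagic,
                               bool has_crcs)
  {
          if (has_crcs) {
                  /* Skip the "2.6.26-rc1"-style release word; CRCs cover it. */
                  amagic += strcspn(amagic, " ");
                  bmagic += strcspn(bmagic, " ");
          }
          return strcmp(amagic, bmagic) == 0;
  }

  int main(void)
  {
          const char *kernel = "2.6.26-rc1 SMP mod_unload modversions ";
          const char *module = "2.6.26-rc2 SMP mod_unload modversions ";

          printf("with crcs:    %d\n", model_same_magic(kernel, module, true));
          printf("without crcs: %d\n", model_same_magic(kernel, module, false));
          return 0;
  }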
err = try_to_force_load(mod, "magic");
if (err)
goto free_hdr;
- } else if (!same_magic(modmagic, vermagic)) {
+ } else if (!same_magic(modmagic, vermagic, versindex)) {
printk(KERN_ERR "%s: version magic '%s' should be '%s'\n",
mod->name, modmagic, vermagic);
err = -ENOEXEC;
ret = 0;
spliced = 0;
- while (len && !spliced) {
+ while (len) {
ret = subbuf_splice_actor(in, ppos, pipe, len, flags, &nonpad_ret);
if (ret < 0)
break;
asmlinkage void __sched preempt_schedule(void)
{
struct thread_info *ti = current_thread_info();
- struct task_struct *task = current;
- int saved_lock_depth;
/*
* If there is a non-zero preempt_count or interrupts are disabled,
do {
add_preempt_count(PREEMPT_ACTIVE);
-
- /*
- * We keep the big kernel semaphore locked, but we
- * clear ->lock_depth so that schedule() doesnt
- * auto-release the semaphore:
- */
- saved_lock_depth = task->lock_depth;
- task->lock_depth = -1;
schedule();
- task->lock_depth = saved_lock_depth;
sub_preempt_count(PREEMPT_ACTIVE);
/*
asmlinkage void __sched preempt_schedule_irq(void)
{
struct thread_info *ti = current_thread_info();
- struct task_struct *task = current;
- int saved_lock_depth;
/* Catch callers which need to be fixed */
BUG_ON(ti->preempt_count || !irqs_disabled());
do {
add_preempt_count(PREEMPT_ACTIVE);
-
- /*
- * We keep the big kernel semaphore locked, but we
- * clear ->lock_depth so that schedule() doesnt
- * auto-release the semaphore:
- */
- saved_lock_depth = task->lock_depth;
- task->lock_depth = -1;
local_irq_enable();
schedule();
local_irq_disable();
- task->lock_depth = saved_lock_depth;
sub_preempt_count(PREEMPT_ACTIVE);
/*
} while (need_resched());
}
-#if !defined(CONFIG_PREEMPT) || defined(CONFIG_PREEMPT_VOLUNTARY)
int __sched _cond_resched(void)
{
if (need_resched() && !(preempt_count() & PREEMPT_ACTIVE) &&
return 0;
}
EXPORT_SYMBOL(_cond_resched);
-#endif
/*
* cond_resched_lock() - if a reschedule is pending, drop the given lock,
spin_unlock_irqrestore(&rq->lock, flags);
/* Set the preempt count _outside_ the spinlocks! */
+#if defined(CONFIG_PREEMPT)
+ task_thread_info(idle)->preempt_count = (idle->lock_depth >= 0);
+#else
task_thread_info(idle)->preempt_count = 0;
-
+#endif
/*
* The idle tasks have their own, simple scheduling class:
*/
if (!initial) {
/* sleeps upto a single latency don't count. */
if (sched_feat(NEW_FAIR_SLEEPERS)) {
+ unsigned long thresh = sysctl_sched_latency;
+
+ /*
+ * convert the sleeper threshold into virtual time
+ */
if (sched_feat(NORMALIZED_SLEEPER))
- vruntime -= calc_delta_weight(sysctl_sched_latency, se);
- else
- vruntime -= sysctl_sched_latency;
+ thresh = calc_delta_fair(thresh, se);
+
+ vruntime -= thresh;
}
/* ensure we never gain time by being placed backwards. */
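calc_delta_fair() converts the wall-clock sleeper threshold into virtual time
scaled by the entity's weight, so heavier tasks receive a smaller vruntime
bonus. My reading of that weighting is roughly delta * NICE_0_LOAD / weight;
treat the formula and the constants below as assumptions, not as the
scheduler's exact arithmetic:

  #include <stdio.h>

  #define MODEL_NICE_0_LOAD 1024UL        /* weight of a nice-0 task */

  /* Rough model of calc_delta_fair(): wall-clock delta -> virtual time */
  static unsigned long model_calc_delta_fair(unsigned long delta,
                                             unsigned long weight)
  {
          return delta * MODEL_NICE_0_LOAD / weight;
  }

  int main(void)
  {
          unsigned long latency_ns = 20000000UL;  /* e.g. 20ms sched latency */

          /* nice 0 (weight 1024) vs a heavier nice -5 task (weight ~3121) */
          printf("nice  0 threshold: %lu ns\n",
                 model_calc_delta_fair(latency_ns, 1024));
          printf("nice -5 threshold: %lu ns\n",
                 model_calc_delta_fair(latency_ns, 3121));
          return 0;
  }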
#include <linux/semaphore.h>
/*
- * The 'big kernel semaphore'
+ * The 'big kernel lock'
*
- * This mutex is taken and released recursively by lock_kernel()
+ * This spinlock is taken and released recursively by lock_kernel()
* and unlock_kernel(). It is transparently dropped and reacquired
* over schedule(). It is used to protect legacy code that hasn't
* been migrated to a proper locking design yet.
*
- * Note: code locked by this semaphore will only be serialized against
- * other code using the same locking facility. The code guarantees that
- * the task remains on the same CPU.
- *
* Don't use in new code.
*/
-static DECLARE_MUTEX(kernel_sem);
+static __cacheline_aligned_in_smp DEFINE_SPINLOCK(kernel_flag);
+
/*
- * Re-acquire the kernel semaphore.
+ * Acquire/release the underlying lock from the scheduler.
*
- * This function is called with preemption off.
+ * This is called with preemption disabled, and should
+ * return an error value if it cannot get the lock and
+ * TIF_NEED_RESCHED gets set.
*
- * We are executing in schedule() so the code must be extremely careful
- * about recursion, both due to the down() and due to the enabling of
- * preemption. schedule() will re-check the preemption flag after
- * reacquiring the semaphore.
+ * If it successfully gets the lock, it should increment
+ * the preemption count like any spinlock does.
+ *
+ * (This works on UP too - _raw_spin_trylock will never
+ * return false in that case)
*/
int __lockfunc __reacquire_kernel_lock(void)
{
- struct task_struct *task = current;
- int saved_lock_depth = task->lock_depth;
-
- BUG_ON(saved_lock_depth < 0);
-
- task->lock_depth = -1;
- preempt_enable_no_resched();
-
- down(&kernel_sem);
-
+ while (!_raw_spin_trylock(&kernel_flag)) {
+ if (test_thread_flag(TIF_NEED_RESCHED))
+ return -EAGAIN;
+ cpu_relax();
+ }
preempt_disable();
- task->lock_depth = saved_lock_depth;
-
return 0;
}
void __lockfunc __release_kernel_lock(void)
{
- up(&kernel_sem);
+ _raw_spin_unlock(&kernel_flag);
+ preempt_enable_no_resched();
}
/*
- * Getting the big kernel semaphore.
+ * These are the BKL spinlocks - we try to be polite about preemption.
+ * If SMP is not on (ie UP preemption), this all goes away because the
+ * _raw_spin_trylock() will always succeed.
*/
-void __lockfunc lock_kernel(void)
+#ifdef CONFIG_PREEMPT
+static inline void __lock_kernel(void)
{
- struct task_struct *task = current;
- int depth = task->lock_depth + 1;
+ preempt_disable();
+ if (unlikely(!_raw_spin_trylock(&kernel_flag))) {
+ /*
+ * If preemption was disabled even before this
+ * was called, there's nothing we can be polite
+ * about - just spin.
+ */
+ if (preempt_count() > 1) {
+ _raw_spin_lock(&kernel_flag);
+ return;
+ }
- if (likely(!depth))
/*
- * No recursion worries - we set up lock_depth _after_
+ * Otherwise, let's wait for the kernel lock
+ * with preemption enabled..
*/
- down(&kernel_sem);
+ do {
+ preempt_enable();
+ while (spin_is_locked(&kernel_flag))
+ cpu_relax();
+ preempt_disable();
+ } while (!_raw_spin_trylock(&kernel_flag));
+ }
+}
- task->lock_depth = depth;
+#else
+
+/*
+ * Non-preemption case - just get the spinlock
+ */
+static inline void __lock_kernel(void)
+{
+ _raw_spin_lock(&kernel_flag);
}
+#endif
-void __lockfunc unlock_kernel(void)
+static inline void __unlock_kernel(void)
{
- struct task_struct *task = current;
+ /*
+ * the BKL is not covered by lockdep, so we open-code the
+ * unlocking sequence (and thus avoid the dep-chain ops):
+ */
+ _raw_spin_unlock(&kernel_flag);
+ preempt_enable();
+}
- BUG_ON(task->lock_depth < 0);
+/*
+ * Getting the big kernel lock.
+ *
+ * This cannot happen asynchronously, so we only need to
+ * worry about other CPU's.
+ */
+void __lockfunc lock_kernel(void)
+{
+ int depth = current->lock_depth+1;
+ if (likely(!depth))
+ __lock_kernel();
+ current->lock_depth = depth;
+}
- if (likely(--task->lock_depth < 0))
- up(&kernel_sem);
+void __lockfunc unlock_kernel(void)
+{
+ BUG_ON(current->lock_depth < 0);
+ if (likely(--current->lock_depth < 0))
+ __unlock_kernel();
}
EXPORT_SYMBOL(lock_kernel);
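The new lock_kernel()/unlock_kernel() keep the per-task lock_depth counter, so
the BKL remains recursive even though the underlying primitive is a spinlock
again: only the outermost acquisition touches kernel_flag. A minimal userspace
model of that depth counting (a pthread mutex stands in for the spinlock;
everything here is illustrative):

  #include <pthread.h>
  #include <stdio.h>

  static pthread_mutex_t model_kernel_flag = PTHREAD_MUTEX_INITIALIZER;
  static __thread int model_lock_depth = -1;      /* per "task" */

  static void model_lock_kernel(void)
  {
          int depth = model_lock_depth + 1;

          if (!depth)                     /* outermost acquisition only */
                  pthread_mutex_lock(&model_kernel_flag);
          model_lock_depth = depth;
  }

  static void model_unlock_kernel(void)
  {
          if (--model_lock_depth < 0)     /* drop the lock at the outermost level */
                  pthread_mutex_unlock(&model_kernel_flag);
  }

  int main(void)
  {
          model_lock_kernel();
          model_lock_kernel();            /* recursive: no second mutex_lock */
          printf("depth after two acquisitions: %d\n", model_lock_depth);
          model_unlock_kernel();
          model_unlock_kernel();
          printf("depth after release: %d\n", model_lock_depth);
          return 0;
  }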
}
EXPORT_SYMBOL(should_remove_suid);
-int __remove_suid(struct dentry *dentry, int kill)
+static int __remove_suid(struct dentry *dentry, int kill)
{
struct iattr newattrs;
if (!n)
continue;
- if (atomic_read(&n->total_objects))
+ if (atomic_long_read(&n->total_objects))
return 1;
}
return 0;
*/
int can_send(struct sk_buff *skb, int loop)
{
+ struct sk_buff *newskb = NULL;
int err;
if (skb->dev->type != ARPHRD_CAN) {
* If the interface is not capable to do loopback
* itself, we do it here.
*/
- struct sk_buff *newskb = skb_clone(skb, GFP_ATOMIC);
-
+ newskb = skb_clone(skb, GFP_ATOMIC);
if (!newskb) {
kfree_skb(skb);
return -ENOMEM;
newskb->sk = skb->sk;
newskb->ip_summed = CHECKSUM_UNNECESSARY;
newskb->pkt_type = PACKET_BROADCAST;
- netif_rx(newskb);
}
} else {
/* indication for the CAN driver: no loopback required */
if (err > 0)
err = net_xmit_errno(err);
+ if (err) {
+ if (newskb)
+ kfree_skb(newskb);
+ return err;
+ }
+
+ if (newskb)
+ netif_rx(newskb);
+
/* update statistics */
can_stats.tx_frames++;
can_stats.tx_frames_delta++;
- return err;
+ return 0;
}
EXPORT_SYMBOL(can_send);
{
int ret = 0;
+ ASSERT_RTNL();
+
/*
* Is it already up?
*/
*/
int dev_close(struct net_device *dev)
{
+ ASSERT_RTNL();
+
might_sleep();
if (!(dev->flags & IFF_UP))
rtnl_lock();
for_each_netdev_safe(net, dev, next) {
int err;
+ char fb_name[IFNAMSIZ];
/* Ignore unmoveable devices (i.e. loopback) */
if (dev->features & NETIF_F_NETNS_LOCAL)
continue;
/* Push remaing network devices to init_net */
- err = dev_change_net_namespace(dev, &init_net, "dev%d");
+ snprintf(fb_name, IFNAMSIZ, "dev%d", dev->ifindex);
+ err = dev_change_net_namespace(dev, &init_net, fb_name);
if (err) {
- printk(KERN_WARNING "%s: failed to move %s to init_net: %d\n",
+ printk(KERN_EMERG "%s: failed to move %s to init_net: %d\n",
__func__, dev->name, err);
- unregister_netdevice(dev);
+ BUG();
}
}
rtnl_unlock();
iph = ip_hdr(skb);
/*
- * RFC1122: 3.1.2.2 MUST silently discard any IP frame that fails the checksum.
+ * RFC1122: 3.2.1.2 MUST silently discard any IP frame that fails the checksum.
*
* Is the datagram acceptable?
*
#define FLAG_FORWARD_PROGRESS (FLAG_ACKED|FLAG_DATA_SACKED)
#define FLAG_ANY_PROGRESS (FLAG_FORWARD_PROGRESS|FLAG_SND_UNA_ADVANCED)
-#define IsSackFrto() (sysctl_tcp_frto == 0x2)
-
#define TCP_REMNANT (TCP_FLAG_FIN|TCP_FLAG_URG|TCP_FLAG_SYN|TCP_FLAG_PSH)
#define TCP_HP_BITS (~(TCP_RESERVED_BITS|TCP_FLAG_PSH))
tp->sacked_out = 0;
}
+static int tcp_is_sackfrto(const struct tcp_sock *tp)
+{
+ return (sysctl_tcp_frto == 0x2) && !tcp_is_reno(tp);
+}
+
/* F-RTO can only be used if TCP has never retransmitted anything other than
* head (SACK enhanced variant from Appendix B of RFC4138 is more robust here)
*/
if (icsk->icsk_mtup.probe_size)
return 0;
- if (IsSackFrto())
+ if (tcp_is_sackfrto(tp))
return 1;
/* Avoid expensive walking of rexmit queue if possible */
/* Earlier loss recovery underway (see RFC4138; Appendix B).
* The last condition is necessary at least in tp->frto_counter case.
*/
- if (IsSackFrto() && (tp->frto_counter ||
+ if (tcp_is_sackfrto(tp) && (tp->frto_counter ||
((1 << icsk->icsk_ca_state) & (TCPF_CA_Recovery|TCPF_CA_Loss))) &&
after(tp->high_seq, tp->snd_una)) {
tp->frto_highmark = tp->high_seq;
return 1;
}
- if (!IsSackFrto() || tcp_is_reno(tp)) {
+ if (!tcp_is_sackfrto(tp)) {
/* RFC4138 shortcoming in step 2; should also have case c):
* ACK isn't duplicate nor advances window, e.g., opposite dir
* data, winupdate
}
icmp_send(skb, ICMP_DEST_UNREACH, ICMP_PORT_UNREACH, 0);
- kfree_skb(skb);
read_unlock(&ipip6_lock);
out:
+ kfree_skb(skb);
return 0;
}
struct rc_pid_sta_info *sinfo = inode->i_private;
struct rc_pid_event_buffer *events = &sinfo->events;
struct rc_pid_events_file_info *file_info;
- unsigned int status;
+ unsigned long status;
/* Allocate a state struct */
file_info = kmalloc(sizeof(*file_info), GFP_KERNEL);
char pb[RC_PID_PRINT_BUF_SIZE];
int ret;
int p;
- unsigned int status;
+ unsigned long status;
/* Check if there is something to read. */
if (events->next_entry == file_info->next_entry) {
tristate 'DCCP protocol connection tracking support (EXPERIMENTAL)'
depends on EXPERIMENTAL && NF_CONNTRACK
depends on NETFILTER_ADVANCED
+ default IP_DCCP
help
With this option enabled, the layer 3 independent connection
tracking code will be able to do state tracking on DCCP connections.
tristate 'SCTP protocol connection tracking support (EXPERIMENTAL)'
depends on EXPERIMENTAL && NF_CONNTRACK
depends on NETFILTER_ADVANCED
+ default IP_SCTP
help
With this option enabled, the layer 3 independent connection
tracking code will be able to do state tracking on SCTP connections.
tristate '"dccp" protocol match support'
depends on NETFILTER_XTABLES
depends on NETFILTER_ADVANCED
+ default IP_DCCP
help
With this option enabled, you will be able to use the iptables
`dccp' match in order to match on DCCP source/destination ports
tristate '"sctp" protocol match support (EXPERIMENTAL)'
depends on NETFILTER_XTABLES && EXPERIMENTAL
depends on NETFILTER_ADVANCED
+ default IP_SCTP
help
With this option enabled, you will be able to use the
`sctp' match in order to match on SCTP source/destination ports
{
enum ip_conntrack_info ctinfo;
struct nf_conn *ct = nf_ct_get(skb, &ctinfo);
+ struct nf_conn_help *help = nfct_help(ct);
unsigned int matchoff, matchlen;
unsigned int mediaoff, medialen;
unsigned int sdpoff;
if (nf_nat_sdp_session && ct->status & IPS_NAT_MASK)
ret = nf_nat_sdp_session(skb, dptr, sdpoff, datalen, &rtp_addr);
+ if (ret == NF_ACCEPT && i > 0)
+ help->help.ct_sip_info.invite_cseq = cseq;
+
return ret;
}
static int process_invite_response(struct sk_buff *skb,
{
enum ip_conntrack_info ctinfo;
struct nf_conn *ct = nf_ct_get(skb, &ctinfo);
+ struct nf_conn_help *help = nfct_help(ct);
if ((code >= 100 && code <= 199) ||
(code >= 200 && code <= 299))
return process_sdp(skb, dptr, datalen, cseq);
- else {
+ else if (help->help.ct_sip_info.invite_cseq == cseq)
flush_expectations(ct, true);
- return NF_ACCEPT;
- }
+ return NF_ACCEPT;
}
static int process_update_response(struct sk_buff *skb,
{
enum ip_conntrack_info ctinfo;
struct nf_conn *ct = nf_ct_get(skb, &ctinfo);
+ struct nf_conn_help *help = nfct_help(ct);
if ((code >= 100 && code <= 199) ||
(code >= 200 && code <= 299))
return process_sdp(skb, dptr, datalen, cseq);
- else {
+ else if (help->help.ct_sip_info.invite_cseq == cseq)
flush_expectations(ct, true);
- return NF_ACCEPT;
- }
+ return NF_ACCEPT;
}
static int process_prack_response(struct sk_buff *skb,
{
enum ip_conntrack_info ctinfo;
struct nf_conn *ct = nf_ct_get(skb, &ctinfo);
+ struct nf_conn_help *help = nfct_help(ct);
if ((code >= 100 && code <= 199) ||
(code >= 200 && code <= 299))
return process_sdp(skb, dptr, datalen, cseq);
- else {
+ else if (help->help.ct_sip_info.invite_cseq == cseq)
flush_expectations(ct, true);
- return NF_ACCEPT;
- }
+ return NF_ACCEPT;
}
static int process_bye_request(struct sk_buff *skb,
#include <linux/mm.h>
#include <linux/interrupt.h>
#include <linux/module.h>
-#include <linux/sched.h>
#include <linux/sunrpc/types.h>
#include <linux/sunrpc/xdr.h>
/*
* TIPC message buffer code
*
- * TIPC message buffer headroom reserves space for a link-level header
- * (in case the message is sent off-node),
- * while ensuring TIPC header is word aligned for quicker access
+ * TIPC message buffer headroom reserves space for the worst-case
+ * link-level device header (in case the message is sent off-node).
*
- * The largest header currently supported is 18 bytes, which is used when
- * the standard 14 byte Ethernet header has 4 added bytes for VLAN info
+ * Note: Headroom should be a multiple of 4 to ensure the TIPC header fields
+ * are word aligned for quicker access
*/
-#define BUF_HEADROOM 20u
+#define BUF_HEADROOM LL_MAX_HEADER
struct tipc_skb_cb {
void *handle;
config SND_PCSP
- tristate "Internal PC speaker support"
- depends on X86_PC && HIGH_RES_TIMERS
+ tristate "PC-Speaker support"
+ depends on PCSPKR_PLATFORM && X86_PC && HIGH_RES_TIMERS
depends on INPUT
depends on SND
select SND_PCM
return 1;
mem = ioremap(base, 128);
- if(mem == 0UL)
+ if (!mem)
return 1;
map = readw(mem + 0x18); /* Read the SMI enables */
iounmap(mem);
if (prtd->period_ptr >= prtd->dma_buffer_end) {
prtd->period_ptr = prtd->dma_buffer;
}
- at91_ssc_write(params->ssc_base + params->pdc->xnpr, prtd->period_ptr);
+ at91_ssc_write(params->ssc_base + params->pdc->xnpr,
+ prtd->period_ptr);
at91_ssc_write(params->ssc_base + params->pdc->xncr,
prtd->period_size / params->pdc_xfer_size);
}
at91_ssc_write(params->ssc_base + AT91_SSC_IER,
params->mask->ssc_endx | params->mask->ssc_endbuf);
- at91_ssc_write(params->ssc_base + ATMEL_PDC_PTCR, params->mask->pdc_enable);
+ at91_ssc_write(params->ssc_base + ATMEL_PDC_PTCR,
+ params->mask->pdc_enable);
- DBG("sr=%lx imr=%lx\n", at91_ssc_read(params->ssc_base + AT91_SSC_SR),
- at91_ssc_read(params->ssc_base + AT91_SSC_IER));
+ DBG("sr=%lx imr=%lx\n",
+ at91_ssc_read(params->ssc_base + AT91_SSC_SR),
+ at91_ssc_read(params->ssc_base + AT91_SSC_IMR));
break;
case SNDRV_PCM_TRIGGER_STOP:
printk(KERN_WARNING "at91-ssc: request_irq failure\n");
DBG("Stopping pid %d clock\n", ssc_p->ssc.pid);
- at91_sys_write(AT91_PMC_PCER, 1<<ssc_p->ssc.pid);
+ at91_sys_write(AT91_PMC_PCDR, 1<<ssc_p->ssc.pid);
return ret;
}